I admit to feeling some awe when I see institutional glaciers scraping together all that America has to offer in AI expertise, facilities, and resources in a single, grand, and collaborative environment. In February 2024, the National Institute of Standards and Technology (NIST) formally birthed the new Artificial Intelligence Safety Institute Consortium (AISIC). I’ve argued that proactive, legislative AI rulemaking will necessarily be overtaken by reactive, judicial rulemaking. In contrast with the ponderous EU AI Act, the American government has significantly ramped up efforts to create a nimble evaluation-validation apparatus to enhance its advanced AI rulemaking capabilities. AISIC/NIST only needs to balance business, regulatory, and safety incentives and arrive at new ways to make rules before more automakers, developers, and airlines set lasting precedent with their AI accidents. Here are a few thoughts on AISIC’s strengths and weaknesses.
Background
The National AI Advisory Committee (NAIAC), established in April 2022, guides the United States' approach to AI by advising the U.S. President and the National AI Initiative Office on topics related to AI. Following Executive Order 14110 in October 2023, a request for AISIC members appeared in the Federal Register and NAIAC recommended operationalizing the U.S. AI Safety Institute (U.S. AISI) under NIST, both by December 2023. The Department of Commerce (DOC), which houses NIST, announced the U.S. AISI executive leadership team on February 7, 2024, and formally announced the AISIC on February 8, 2024.
The Collaboration
The cooperative framework employed by AISIC, facilitated through the Cooperative Research and Development Agreement (CRADA), is not new to NIST. Similar frameworks have been implemented in other sectors, such as the NIST National Cybersecurity Excellence Partnership (NCEP) program under the National Cybersecurity Center of Excellence (NCCoE).
This cooperative approach allows NIST to attract external funds and resources by offering collaborators information sharing and NIST’s own expertise and facilities in return. Consortium members' proprietary information and IP rights are protected under the CRADA, while NIST is permitted to publish all data from the research (CRADA art 4.5) and to mandate that any resulting inventions be dedicated to the public domain (CRADA art 5.1).
The Collaborators
The 239 AISIC consortium members represent every advanced U.S. AI lab and tech firm that would ostensibly be subject to the U.S. regulations developed from the research results they are sharing. (See the list.) This overwhelming representation of commercial incentives among the consortium members sets up a problem for NIST’s neutral objectives.
Working Groups
AISIC has set up five working groups: 1) risk management for generative AI, 2) authentication and management of synthetic content, 3) capability evaluations, 4) red-teaming exercises, and 5) the safety and security of dual-use foundation models. Each is charged with creating the frameworks, standards, and testing environments necessary to regulate advanced AI technologies. For example, the capability evaluations group is tasked with developing testbeds and creating guidance for evaluating and auditing AI capabilities, focusing on capabilities through which AI could cause harm, such as chemical, biological, radiological, and nuclear (CBRN) threats, cybersecurity, autonomous replication, and control of physical systems. Understanding how and what to measure is just the first step toward AI rulemaking, but it’s the right first step.
Universal Standards
Given the U.S. lead in AI and its commercial power, the potential for AISIC to set universally accepted standards for AI regulation is immense, and so are the potential benefits. But to make AISIC truly extraordinary and correct the apparent skew toward commercial interests, much more collaboration from outside the United States is needed. While the deadline for member applications has passed, AISIC is still accepting select applications, and even national governments are technically permitted to participate (CRADA art 2.2). By actively inviting participation from international labs, tech firms, and governments, AISIC could benefit from a diversity of perspectives and foster a widely legitimized approach to AI regulation. NIST needs to open the door to worthy international members and significant outside governments.