The United States has no federal AI regulation, and the country presents a fragmented picture for AI governance. States such as Maryland, California, and Massachusetts have passed numerous AI-related bills that address specific areas like criminal law, budget considerations, or online safety. Executive Order 14110, issued in November 2023, established a comprehensive suite of regulatory, economic, and strategic initiatives in AI, but those initiatives were focused more on information gathering than on governance.
Opportunities from Advanced AI
The United States leads in advanced AI research, development, and investment. By far the biggest competitors in the AI race today are Microsoft, OpenAI, Google, and Meta, all of them US companies.
Given that present regulations are inconsistent, there would be economic benefits if the federal government occupied the field and knitted together the patchwork of state regulations. The current approach will produce inconsistencies that cause headaches for large businesses. But those headaches can be soothed by global dominance in another bleeding-edge economic arena.
If national regulation of AI would even marginally inhibit the US global lead in advanced artificial intelligence, then to justify it, the threat from fast-and-loose technology would need to significantly outweigh the economic opportunity.
National Security Threats from Advanced AI
How about national security? The February 2024 Gladstone report was commissioned by the Department of State’s Bureau of International Security and Nonproliferation to better understand the international security implications of advanced AI development. The authors of the report are from Gladstone AI, a consultancy that is a current member of the US Artificial Intelligence Safety Institute Consortium (AISIC) within the Department of Commerce.
The report makes a case that AI poses potentially catastrophic and existential risks, and calls for bold federal action to mitigate those risks. It doesn’t present strong enough new evidence to convince skeptics who were not already alarmed about AI. But it does lay out the pieces of the puzzle well enough that we can see where the gaps in that puzzle may be.
The report’s arguments aren’t easy to parse, but we can boil them down to a threat arising from a combination of risks and potential harms. That is, how much we should worry should be based on how likely something bad is to happen (risk) and how bad it would actually be if it did happen (harm).
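The risk-times-harm framing above is just an expected-value calculation, and a toy version makes the logic concrete. The scenario names echo the report's two categories, but every number below is a hypothetical placeholder, not an estimate from the report.

```python
# Toy expected-harm calculation: worry ~ probability of the bad event (risk)
# times severity if it happens (harm). All numbers are hypothetical.

scenarios = {
    "weaponization": {"risk": 0.01, "harm": 9.0},      # placeholder values
    "loss_of_control": {"risk": 0.001, "harm": 10.0},  # placeholder values
}

def expected_harm(risk: float, harm: float) -> float:
    """How much to worry: likelihood of the event times its severity."""
    return risk * harm

for name, s in scenarios.items():
    print(f"{name}: expected harm {expected_harm(s['risk'], s['harm']):.4f}")
```

The point of the toy model is that a low-probability event can still dominate the calculation if its severity is high enough, which is exactly the structure of the report's catastrophic-risk argument.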
The Risk Argument
The potential for risk is described in the Gladstone report under two categories: (1) weaponization, meaning an AI system is controlled by rogue actors to harmful ends; and (2) ‘loss of control’, meaning an AI system does something other than what it’s told, to harmful ends.
The report leaves the exact magnitude of the risks of weaponization or ‘loss of control’ open-ended. However, once a base risk is fixed in your imagination, the Gladstone report adds evidence of a few exacerbating real-world factors:
1. Early research on AI systems shows signs of deceptive behaviors (loss of control)
2. AGI or human-level AI systems may be decades away or only months away, and no one seems to be sure which (loss of control)
3. Recent progress in AI research has been increasingly rapid (loss of control)
4. AI research labs like OpenAI, Google, and Anthropic are, anecdotally, acting recklessly with their research (loss of control) and security (weaponization)
5. AI research labs are incentivized by a race dynamic with the other AI research labs to move as fast as they can (loss of control and weaponization)
6. The development of very powerful AI systems cannot be effectively blocked once the data centers and other resources required to create them have improved beyond some threshold and proliferated around the world (loss of control and weaponization)
7. Current laws are unable to deter AI research labs from creating catastrophic harms (loss of control and weaponization)
The Harm Argument
The potential for harm is described in the Gladstone report almost entirely by rhetorical insinuation: very powerful things can cause great harm. While AI systems today are not “very powerful”, they are more powerful now than they were in the past, and the report provides some reason to believe they will be yet more powerful in the future. As evidence of the trajectory of AI systems toward being “very powerful”, the authors cite the infamous scaling laws, supported by anecdotal concerns from top AI researchers. This is not an unreasonable argument.
This 2020 scaling-law paper, for instance, is no joke, and has held up from its release date to the present. Researchers studied LLMs over seven orders of magnitude in scale and observed “precise power-law scalings” for AI performance as a function of training time, dataset size, model size, and compute budget. There were “no signs of deviation from these trends on the upper end”. In simplified terms, this means that with current algorithms, when more data is put into more computers for more time, more capable AI reliably comes out.
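What a “precise power-law scaling” means in practice can be sketched in a few lines. The sketch below uses the general form L(N) = (N_c / N)^alpha, where N is model size; the constants are illustrative placeholders in the spirit of the paper, not its fitted values.

```python
# Illustrative power-law scaling of loss with model size. The constant n_c
# and exponent alpha are hypothetical placeholders chosen only to show the
# shape of the relationship, not the paper's fitted values.

def loss_from_model_size(n_params: float,
                         n_c: float = 8.8e13,
                         alpha: float = 0.076) -> float:
    """Predicted loss L(N) = (n_c / N)^alpha: bigger models, lower loss."""
    return (n_c / n_params) ** alpha

# The signature property of a power law: every doubling of model size cuts
# the loss by the same constant factor, with no deviation at the upper end.
for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> predicted loss {loss_from_model_size(n):.3f}")
```

The constant-ratio property is what makes the trend so useful for forecasting: if it continues to hold, capability at the next order of magnitude of scale is a straightforward extrapolation rather than a guess.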
From Shockingly Powerful AI to National Legislation
A credible prospect of future AI powerful enough to motivate a government to intervene is what would justify proactive regulation in the present. This is the missing piece of the puzzle in the Gladstone report. But what does AI “very powerful in the future” mean?
It means a shock. Shocks are sudden and intense disturbances, in this case a disturbance in the present that changes what we believe about the future. I had such a shock last week after being introduced to the text-to-music generation service https://suno.com/
Text-to-dialogue, text-to-imagery, and text-to-video all seemed part of the normal progress of things. For me, for some reason, text-to-music was different: unexpected, not what I thought would happen. I’m a little unsure now of what could happen next.
We’re becoming inured to shocks from new AI technology. And yes, the US political landscape is especially tricky to navigate, and bills in the US are notoriously hard to pass. And yes, the US has to work out how to reconcile different state approaches into a federal framework that doesn’t deeply scare its leading industries. All that said, if the scaling laws hold as new AI systems grow by another few orders of magnitude, how suddenly personal apprehension grows among policymakers will determine whether the US government intervenes.