The New US Oversight Agency
Last week the US Congress showed bipartisan agreement with the AI industry that a new oversight agency for AI technology is needed, along with scattered statements that the United States should also lead international regulatory efforts. Enjoy the first of many potential AI-themed hearings of the Senate Subcommittee on Privacy, Technology, and the Law here (3m08s) or on YouTube.
Some key points and comments:
1. The primary witness was Sam Altman (CEO of OpenAI - the company responsible for ChatGPT & GPT-4). His suggestions garnered enthusiastic bipartisan endorsements from Senators Booker (D-NJ), Blumenthal (D-CT), Graham (R-SC), Kennedy (R-LA), and others. People close to Altman say he is genuinely worried about large-scale risks from AI that could be alleviated by new regulations. In the hearing, subcommittee Chair Blumenthal (D-CT) stated to Altman: “Having talked to you privately, I agree … that the prospect of increased danger or risk resulting from even more complex and capable AI mechanisms certainly may be closer than a lot of people appreciate.”
2. What are the AI risks? Much of the conversation related to risks currently posed by AI systems like GPT-4. This was partially summarized by Senator Hawley (R-MO) as: “loss of jobs, invasion of personal privacy on a scale we’ve never before seen, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America”. The other major current concerns were copyright and the related data and model transparency issues, and the ability of AI to help create novel biological agents or assist in other dangerous actions.
3. The stringent regulations discussed looked a lot like the measures that many AI experts would like to see. Some proposals would create entirely new legal regimes, and included:
No Section 230 immunity from liability for companies producing Foundational AI systems.
Impact assessments and tailored regulation for high-risk uses of AI. This is similar to what the EU’s AI Act is doing.
Scorecards (or “nutrition labels”) for AI systems. This would be, as stated by Chair Blumenthal (D-CT), to “encourage competition based on safety and trustworthiness”.
New scientific and legal tools (including federal causes of action) to define liabilities for currently undefined harms. Examples included the harms of ‘counterfeit persons’ (non-disclosure of an AI as such), collective disinformation, mass cybercrime, and broad copyright harms (related to lack of transparency for data inputs).
A new cabinet-level agency (or international authority modeled on CERN, the IAEA, or the IPCC) with responsibility for licensing large-scale AI systems. Those systems would be defined as exceeding a certain threshold on dangerous-capability evaluations (or, if necessary, a certain threshold of computational input, which could be lowered over time); a rough, hypothetical sketch of such a compute-threshold check appears after this list. Agency safety reviews would occur both pre-deployment, similar to the US FDA review-and-approval process, and post-deployment, to revoke licenses and ensure compliance with the safety standards. Further, the agency may have transparency into AI model architectures and be tasked with understanding emerging AI risks.
Additional independent audits from third-party, scientifically qualified reviewers to determine the actual performance of AI models relative to stated safety thresholds. As an example, Altman offered that OpenAI itself continually tests whether its models can self-replicate and exfiltrate into the wild, and that such tests should also be done independently. Altman described this extra step on top of Agency review in the context of the “Cambrian explosion of new businesses, new products, and new services happening on top of these [AI] models”.
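To make the compute-input criterion concrete, below is a minimal, purely hypothetical Python sketch of how a compute-based licensing trigger might be checked. It assumes the common rough approximation that training compute is about 6 FLOPs per parameter per training token; the threshold value, parameter count, and token count are illustrative assumptions, not figures from the hearing or from any actual proposal.

```python
# Hypothetical sketch: checking whether a model's estimated training compute
# crosses a licensing threshold. Uses the common rough approximation of
# ~6 FLOPs per parameter per training token. All numbers are illustrative
# assumptions, not real regulatory values or real model statistics.

LICENSE_THRESHOLD_FLOPS = 1e25  # illustrative threshold, not an actual proposed value


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def requires_license_review(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the (hypothetical) threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= LICENSE_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Illustrative example: a 70B-parameter model trained on 2T tokens.
    params, tokens = 70e9, 2e12
    flops = estimated_training_flops(params, tokens)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("License review required:", requires_license_review(params, tokens))
```

A capability-evaluation trigger would work the same way in spirit, with the compute estimate replaced by scores from standardized dangerous-capability evaluations.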
4. Uncertain future emergent risks motivated the witnesses. From Gary Marcus (Professor Emeritus, NYU; AI leader), “Let me just insert, there are more genies yet to come from more bottles.” From Sam Altman, “Where I think the licensing scheme comes in is not for what these models are capable of today, … you don’t need a new licensing agency [for] that…. But, as we head towards artificial general intelligence, and the impact that will have, and the power of that technology, I think we need to treat that as seriously as we treat other very powerful technologies.”