US Senate pt. II
On July 25, the US Senate held its second hearing on “Oversight of AI,” convened by subcommittee chair Sen. Blumenthal.
The witnesses were Dario Amodei, CEO of Anthropic; Yoshua Bengio, Turing Award winner and the second-most cited AI researcher in the world; and Stuart Russell, professor at UC Berkeley and co-author of the standard AI textbook.
The discourse has progressed since the first hearing two months ago: a new oversight agency for AI is now an explicit proposal, and the United States is openly positioning itself to lead international regulatory efforts. You can find this hearing on C-SPAN here.
Some key comments from the Chair and witnesses:
The US AI Agency
Senator Blumenthal: I’ve come to the conclusion that we need some kind of regulatory agency, but not just a reactive body… actually investing proactively in research… We need to be creative about the kind of agency, or entity, the body… I think the language is less important than its real enforcement power and the resources invested in it.
Stuart Russell: [N]o government agency is going to be able to match the resources that are going into the creation of these AI systems — the numbers I’ve seen are roughly 10 billion dollars a month going into AGI startups… [but] There’s no doubt that we’re going to have to have an agency. If things go as expected, AI is going to end up being responsible for the majority of economic output in the United States, so it cannot be the case that there’s no overall regulatory agency.
All Speakers Reiterated “Testing and Auditing”
Senator Blumenthal: Building on our previous hearing, I think there are core standards that we are building bipartisan consensus around … A testing and auditing regimen by objective 3rd parties or by preferably the new entity that we will establish; …
Dario Amodei: … we should recognize that the science of testing and auditing for AI systems is in its infancy… thus it is important to fund both measurement and research on measurement to ensure a testing and auditing regime is actually effective… for all the risks ranging from those we face today, like misinformation… to the biological risks that I’m worried about in 2 or 3 years, to the risks of autonomous replication that are some unspecified period after that. All of those can be tied to different kinds of tests… that strikes me as a scaffolding on which we can build lots of different concerns… I think without such testing, we’re blind…
The US International Lead in AI
Yoshua Bengio: [T]here are a few countries… that have really important concentration of talent in AI. In Canada we’ve contributed a lot… there’s also a lot of really good European researchers in the UK and outside the UK.
Stuart Russell: I think the closest competitor we have is probably the UK in terms of making advances in basic research, both in academia and in DeepMind in particular… I’ve spent a fair amount of time in China, I was there a month ago talking to the major institutions that are working on AGI, and my sense is that we’ve slightly overstated the level of threat that they currently present — they’ve mostly been building copycat systems that turn out not to be nearly as good as the systems that are coming out from Anthropic and OpenAI and Google. But the intent is definitely there… I don’t think anywhere else is in the same league as [Anthropic and OpenAI and Google]. Russia in particular has been completely denuded of its experts and was already well behind.
Repeated Mentions of “the Urgency”
Yoshua Bengio: … advancements have led many top AI researchers, including myself, to revise our estimates of when human-level intelligence could be achieved. Previously thought to be decades or even centuries away, we now believe it could be within a few years or decades.
Dario Amodei: And the final thing I would emphasize is I don’t think we have a lot of time… To focus people’s minds on the biorisks — I would really target 2025, 2026, maybe even some chance of 2024 — if we don’t have things in place that are restraining what can be done with AI systems, we’re gonna have a really bad time.