AI Politics: Authoritarian-Centre, Eliezer Yudkowsky, et al.
MIRI's (and AI Safety's) Appeal to Big Government on AI
I’ve written recently that OpenAI and e/acc represent the revolutionary left of the political space in relation to AI issues, in the authoritarian and libertarian quadrants, respectively. OpenAI appears to have doubled down on its controlling approach to the AI revolution, as evidenced by recent leaks revealing aggressive suppression of internal dissent and the addition of retired U.S. Army General Paul Nakasone, former commander of US Cyber Command (USCYBERCOM) and director of the NSA, to its board of directors.
These hardening policy stances matter for two reasons: their unusual authenticity and the implications for AI regulation.
Authentic policy positions stand out amidst a sea of superficial statements. Jack Clark, former OpenAI policy director and co-founder of the AI lab Anthropic, notes that “many people in AI policy mak[e] overconfident statements about things they haven't thought that hard about.” In contrast, OpenAI and e/acc deeply understand their positions as demonstrated by a rare commitment to acting out their principles. (Incidentally, Clark’s own transparent uncertainty in this area is as rare and admirable as he implies.)
Moreover, understanding these positions helps predict which jurisdictions might enact meaningful AI regulation. Between OpenAI's authoritarian stance and e/acc's libertarian approach, it's clear that the former is dominant, particularly if OpenAI is finding support from entities like the US National Security Agency, while e/acc’s strongest supporters are the venture capitalists at A16Z.
MIRI in the Authoritarian Centre
Today I’m adding another significant position to this landscape: the Machine Intelligence Research Institute (MIRI) (intelligence.org) and its founder Eliezer Yudkowsky. MIRI’s policy objective is clear: to cancel the AI revolution by shutting down all frontier AI systems. On May 29, they released an ambitious policy statement to that effect:
The only way we think we will [shut down the development of frontier AI systems worldwide before it is too late] is if policymakers actually get it, if they actually come to understand that building misaligned smarter-than-human systems will kill everyone, including their children. They will pass strong enough laws and enforce them if and only if they come to understand this central truth. [emphasis in original]
Arguably, no one has thought about AI policy more than Yudkowsky, and no one is losing the political battle as badly. As of now, MIRI’s largest source of support is an anonymous Ethereum cryptocurrency investor. That said, political winds change with time and events, and MIRI’s stance is clear and coherent. Few other groups stand to reap the benefit of such changes in the aspirational authoritarian centre, and those that exist carry compromised and confused banners. Quoting Clark again:
MIRI is not in the class of people that make overconfident claims with very little to support the claims - rather, the people behind MIRI have spent decades thinking about AI technology and AI safety and have arrived at a very coherent position. I think it's admirable to describe a policy position clearly and directly and I want to congratulate MIRI for writing this.
Political Winds
Like OpenAI, but with the opposite agenda, MIRI states that it is trying to reach the “subset of policy advisors who have the skill of thinking clearly and carefully about risk, particularly those with experience in national security”. Perhaps unlike OpenAI, MIRI also intends to target its policy at “the general public”.
Consider your natural inclinations toward technological or social progress. What events or influences could compel you to make all progress stop? Would the general public need to be scared by a near catastrophe to push that big red button? Or might we all simply become exhausted, over the next few years, by the uncertainty and the rapid pace of change?