OpenAI's Flash Coup
Last week's developments at OpenAI, the undisputed leading AI company, were a whirlwind.
While the week closed with many of the pieces back in the same places, many interested observers now feel naïve for having believed that OpenAI was a different kind of company. That feeling is only compounded by rumours of an AI system in OpenAI's basement that might require a different kind of company to handle.
Let's lightly dissect the situation and its legal significance.
OpenAI's Self-Image
OpenAI was established with a grand vision: to lead the development of AGI, defined as artificial intelligence that outperforms humans at most economically valuable work (and soon thereafter re-writes the rules of society). The founders positioned OpenAI as a counterbalance to DeepMind, whose acquisition by Google had raised concerns about the concentration of power. OpenAI committed to prioritizing AI safety and accountability above all else, even to the point of self-destruction if necessary for the greater good.
An Unconventional Board Structure
OpenAI's governance is unique, designed to shield it from typical financial pressures. A non-profit board oversees a U.S. 501(c)(3) public charity, obligated to act for humanity's collective benefit. This non-profit controls a for-profit entity, OpenAI GP LLC, which in turn holds a majority stake in OpenAI Global LLC, a capped-profit company that limits investor returns to 100 times the original investment. This structure was widely admired among interested observers for its principled approach. But principles live in the realm of theory, and many (non-lawyer) people now feel very naïve about the whole thing. In the real world, board power is balanced by the power of capital and labour, and when the legal principles were tested, capital and labour won the day.
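To make the 100x cap concrete, here is a minimal sketch of the rule as publicly described. The function name and simplified mechanics are my own illustration; the actual profit-distribution terms involve investor-specific caps and tiered waterfalls not shown here.

```python
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> float:
    """Illustrative payout under a capped-profit rule.

    Any gains beyond cap_multiple times the original investment
    would flow back to the non-profit rather than the investor.
    """
    cap = investment * cap_multiple
    return min(gross_return, cap)

# A $1M investment whose stake grows to $500M pays out only $100M;
# the remaining $400M would accrue to the non-profit.
print(capped_return(1_000_000, 500_000_000))  # 100000000.0
```

The point of the cap, on this design, is that runaway AGI-scale profits revert to the charity rather than to shareholders.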
Inevitable Tensions
OpenAI's progress and partnerships have been impressive, making AGI seem at least a plausible objective. This success, perhaps accelerated by the popularity of ChatGPT, has fed unconfirmed but consistent reports of internal conflicts, brewing for about a year and erupting last week. These tensions essentially reflect classic governance dichotomies: R&D versus commercialization, safety versus implementation, and executive authority versus collective board oversight. Notably, the differing backgrounds of its most prominent figures, Altman and Sutskever, highlight a deep-rooted divergence in approaches to AGI's development.
The Tipping Point
Amidst the unfolding drama, an unconfirmed Reuters report suggested a secretive breakthrough might have been the catalyst for the OpenAI board's actions. Codenamed Q*, this supposed leap involves a novel AI model blending Q-Learning with A* algorithms, enhancing the system's predictive capabilities, particularly in mathematics. If these rumours are accurate, Altman's concealment of this breakthrough was at least part of the flashpoint for the board's drastic intervention. In the larger arc of the company, Q* would also mark a significant stride towards the AGI for which OpenAI was founded.
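Everything about Q* itself is rumour, and nothing below comes from OpenAI. For readers unfamiliar with the first half of the reported name, though, classic tabular Q-Learning is well documented: an agent iteratively refines a value table Q(s, a) from observed rewards. A textbook single-step sketch:

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.99):
    """One tabular Q-Learning step:
    Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q[(state, action)]

Q = defaultdict(float)
# One step on a toy transition: reward 1.0, no estimated future value yet,
# so the entry moves alpha * reward = 0.1 of the way toward the target.
q_learning_update(Q, "s0", "right", 1.0, "s1", ["left", "right"])
print(Q[("s0", "right")])  # 0.1
```

A*, the other half of the name, is likewise a standard heuristic search algorithm; how (or whether) the two are actually combined in Q* is exactly what remains unconfirmed.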