Alberto Romero’s Substack “The Algorithmic Bridge” is one of the best specialized newsletters on AI industry news. You can subscribe for free here. Last week Romero published a piece titled “There's No AI Race: Google, Microsoft, OpenAI, and Anthropic Were Always on the Same Team” (PDF attached). It’s a valuable look at the counterintuitive, regulation-seeking behavior of the Big AI companies. They seem to be initiating their own safety regulations, at least in the United States, and here’s a very brief series of excerpts from Romero’s article outlining his theory as to why:
The AI companies …
Google, Microsoft, OpenAI, and Anthropic [are] the biggest generative AI players and the only companies building models superior to what exists today (Gemini, GPT-5, and Claude-Next)… (Meta isn’t working on anything beyond GPT-4.)
… want to grow, together.
They compete on a purely business level but are far less concerned about outperforming each other than they are about growing together, protected from external threats that might prevent them from thriving….
Their new Frontier Model Forum promotes “safety”.
[The four companies] have partnered to create the Frontier Model Forum (FMF) [an industry body] to ensure the “safe and responsible development of frontier AI models.” The creation of FMF is the latest evidence that reveals they don't really care about race dynamics… At first glance, one may think they’re either extremely altruistic or extremely concerned…
But it’s a masquerade hiding …
What they are actually doing [with the FMF] is masterfully navigating the sea of social phenomena that result from the generative AI wave (partially caused by them) [to] … optimize the likelihood that they remain uninterrupted and undisturbed in doing what they’re doing by either public opinion, the media, or the government.… They will compromise on anything to keep doing their thing (I’m leaving “their thing” in the abstract intentionally).
… existing dubiously-ethical practices
[Part of “their thing” is] engaging in dubiously-ethical practices:
- [unregulated data scraping without attribution, permission, or compensation
- careless design decisions that turn into bias and discrimination once the products are deployed
- hidden underpaid labor that powers the generative AI revolution]
[These practices are] what the FMF is not about, what the FLI and CAIS letters are not about, and what all the talk about regulation coming from AI companies will never be about.