The news is suddenly awash with stories of AI progress stalling out.
In brief, this is a pullback on insider expectations from the most optimistic scenario of building powerful AI “Minds” within a few years to the more modest one of assembling existing elements into capable AI “Interns” over the same period. At the same time, AI researchers are welcoming the new “age of wonder and discovery” in which they find themselves. Whichever reception one adopts, the continued integration of AI into professional firms should proceed apace, as “Interns” are what we were all practically preparing for.
In a little more detail, the major news stems from OpenAI CEO Sam Altman stating that no model called GPT-5 would be launched this year. Instead, OpenAI introduced models like GPT-4o in May and o1 in September 2024, which, for all their strengths, showed a much smaller quality jump than the one from GPT-3 to GPT-4 in 2023. The Information first reported that OpenAI's next model, Orion, "exceeded prior models," but not by much. Similarly, Reuters noted "delays and disappointing outcomes" in attempts across leading labs to outperform GPT-4. Meanwhile, Bloomberg reported that Google’s upcoming Gemini iteration and Anthropic’s Claude 3.5 Opus were underperforming expectations or facing delays.
Harsh and blunt cries of I-told-you-so are springing up around predictions of a prolonged AI plateau and questions about the economic viability of the AI industry.
These developments collectively shine a light on the well-known s-curve for LLM “scaling” on internet data, but they do not herald a cessation of LLM growth along other axes: inference compute, synthetic data, fine-tuning, multimodal capabilities, or better integration into workflows. To be sure, while corporate valuations may be wildly inflated on the possibility of endlessly more intelligent chatbots, the AI industry is already producing profitable products. The AI industry will still grow.
Why is this stalling out happening?
Inside the AI industry, the hope that feeding the internet into the AI alone would yield superintelligent AI “Minds” was always a faint one. Erik Hoel puts it succinctly:
Perhaps we should not dwell too closely on the question of why it is that the sum of human civilization’s data fed into deep learning techniques plateaus at something smart—but not that smart—lest we see ourselves too clearly in the mirror.
Since soon after the original release of GPT-4, at least a dozen smaller but well-funded AI labs have been exploring different paradigms to find ways around this likely roadblock.
OpenAI’s departed co-founder and former chief scientist, Ilya Sutskever, was seeking green fields to research almost as soon as GPT-4 was firmly established in late 2023. Sutskever left OpenAI and co-founded the stealth AI lab Safe Superintelligence (SSI) earlier this year, and recently explained to Reuters, “The 2010s were the age of scaling. Now we're back in the age of wonder and discovery once again.”