
I. AI is/will be powerful
At some time in the future, if you want it, all disputes will be decided easily, all agreements formed effortlessly, all rules interpreted reasonably—by the AI.
Today, advanced AI models substantially improve efficiency by providing structured starting points rather than leaving us to begin from scratch. No associates were engaged to write memos after my strategy meetings last week. In a typical one, a senior colleague had some difficulty articulating a precise response to a niche compliance question from a client. The swift one-page AI draft I silently handed over elicited a simple response: “This is what I intended – let’s use this as the basis for our reply.”
We need to survey the state of the art on the road to thinking machines. OpenAI’s December 20, 2024 announcement of the “o3” model and its breakthrough on the ARC Prize have moved expectations up significantly. This is, frankly, such insane progress that it’s hard to process.
The techno-capitalist flywheel is spinning faster and faster. Following the announcements of “o1” and “o3,” the obvious trend and strategy among the AI labs is to move even faster. AI analyst Gwern summarizes:
Every problem that an o1 solves is now a training data point for an o3 (eg. any o1 session which finally stumbles into the right answer can be refined to drop the dead ends and produce a clean transcript to train a more refined intuition)… I am actually mildly surprised OA has bothered to deploy o1-pro at all, instead of keeping it private and investing the compute into more bootstrapping of o3 training etc.
China’s DeepSeek and Google DeepMind followed suit with their own “o3” counterparts last year. Rather than relaxing after a sustained capex sprint over the past two years, the businesses with the inside view are doubling down on their planned capex spending in 2025. Microsoft will spend $80 billion, versus $64.5 billion on capex last year. Amazon is spending $65 billion, Google $49 billion, and Meta $31 billion. SoftBank is partnering with OpenAI to launch the “Stargate Project,” spending $100 billion immediately and $500 billion on US AI infrastructure over four years. It is uncontested that these companies are building up for something.
The activities inside the big AI labs are equally feverish. From Gwern’s summary, the leaders and researchers at OpenAI and others (Altman, roon, Brown, Sutskever, Bryk, Brundage, Apples, etc.) are:
… suddenly weirdly, almost euphorically, optimistic on Twitter and elsewhere and making a lot of haha-only-serious jokes. Watching the improvement from the original 4o model to o3 (and wherever it is now!) may be why. It's like watching the AlphaGo Elo curves: it just keeps going up... and up... and up...
Despite its impressive real and potential capabilities, AI’s trajectory is not without complexity—something I’ve personally wrestled with. I don't endorse this next sentence lightly; I’ve been sitting with it for some time, not wanting to sound alarmist: I think that any large organization or firm now needs to take the possibility of near-term very powerful AI very seriously.
II. I’ve Been Wrong and I’ve Been Right
In July 2024, I wrote about how LLMs resembled Kahneman’s 'System 1'—fast, instinctive, and emotional—while 'System 2,' the domain of slower, logical reasoning, remained the territory of humans. I was too optimistic. I failed to see that computers have long used both System 1 and System 2, and their integration in a single AI model was inevitable. Today, AI has already demonstrated System 2 capabilities.
That doesn’t mean that AI has a lock on everything that goes along with System 2 reasoning. There is a lot of diagnosis to do about how wisdom breaks apart into component concepts. (I’ve been very impressed with L Rudolf L’s series of essays on AI & Wisdom.) I believe this conclusion from my July article is still basically correct:
Law firms focused on productivity will hold to a vision of eliminating associates-as-tools, replacing them with more productive tools, and will then themselves be replaced by in-house counsel. Law firms focused on experienced, problem-defining counsel for clients will need another vision.
The problem just needs a little better definition. As lawyers, what are we going to do? First, what’s required is diagnosis of the problem. In future essays I’ll work on using the diagnosis for overall guiding policies to concentrate action and resources on the solutions. In this essay, I want to look at how much power AI really holds and where, under current elevated expectations, innovation is likely to play out and fall short.
III. Powerful AI Is Not / Will Not Be Omnipotent
To see what’s coming, we need to accept the disruption that AI’s progress will cause, while also avoiding sliding into hypothetical mania. For better or worse, the combination of innovations being deployed is formidable. But not all human labor will be magically replaced. AI is not a god. In discourse about the AI era, a line often gets crossed between what we can reasonably expect from the technology we can see and the assumption that replication of human intelligence (or wisdom) in ‘a cloud’ must be near, and will make the AI all-powerful.
Even if powerful AI is created tomorrow, strictly speaking, nobody knows how fast AI adoption will proceed. Historically, even transformative technologies do not yield immediate exponential economic growth. Those expecting short-term “explosive” growth (say, 20–40% real GDP growth per year) are making an extraordinary claim, requiring extraordinary evidence, and neither data nor forecasts currently suggest it.
Further, Silicon Valley does overrate intelligence. You can keep piling on IQ or AI, but eventually something else (energy, manufacturing capacity, stable governance) becomes a bottleneck.
Yet technology has changed traditional legal tasks very quickly. We work without dictation, shorthand, legal pads, telephone calls, libraries, typewriters, fax machines, carbon copies, metal filing cabinets, courier services, and handwritten annotations on case files. Technology will continue to change traditional legal tasks. We've designed machines to enhance our vision, amplify our strength and our voices, mimic the aerodynamics of birds and insects—even traced music into petrochemicals and written algorithms for math and science into silicon. There is no reason that more cognition cannot also be designed into new artifacts, and those artifacts be used to solve a significant number of problems in the way of making even better thinking artifacts.
If AI isn’t omnipotent, then what truly differentiates human lawyers? I think the answer still lies in a subset of wisdom, in the concept of legal insight.
IV. Where It Ends: The Problem of Insight (in AI)
The era of law firms as gatekeepers to access courts and contract templates is over. Law firms will lose whatever small remaining business comes from General Counsel retaining us to handle filing paperwork and from SMEs requesting a fresh agreement for a simple, friendly supply contract. You and I don’t need to mourn that passing era.
At our best, lawyers are skilled problem-solvers who notice critical details that a client, also skilled at problem-solving, might nevertheless overlook. In other words, we have professional insight. No AI yet can beat that professional service offering. Let’s analyze this advantage.
First, we need to define “insightful legal reasoning.” An insight is formed when you break your current way of thinking about a problem and generate more complex, more powerful ways of thinking. Drawing from the 4E model of cognition, there are four areas related to insightful legal reasoning to examine: knowledge of the law, iterative adaptability, the ability to use outside tools and sources, and raw experience.
Lawyers rely on their deep knowledge of the law to understand complex situations and provide insights. However, AI already approximates this deep knowledge of the law, and, in the future, lawyers are likely to be bested by AI in pure legal domain knowledge. Lawyers are also able to work adaptably through a problem, learning and adjusting their approach as they interact with facts and legal challenges to find new angles and approaches. However, humans and AI are already approximately matched in the ability to adapt to and work with new information, and, in the future, humans in general are likely to lose this contest as well.
Lawyers also use a variety of external tools and partners—such as AI, abstract mental models, databases, algorithmic software, and teamwork with other experts—to find insights. We’ve got a clear edge in this area today—axiomatically, anything an AI can do, we can use an AI to do. But that edge is going away. As AI disperses through the economy, it will be as simple for an AI to gain immediate access to millions of tools and human partners as it is for us to load a website. As tool users relying on just our thumbs, eyes, and ears, we won’t be able to keep up.
We have a final edge. As human beings, lawyers bring personal insight based on years of hands-on, embodied experience, both as professionals and from a myriad of experiences arising from dynamic interests and desires in domains outside the law. We are organisms, which is too often forgotten. We sustain our identity and functions in changing environments by continuously producing and reproducing our core drives and values in new ways. We evolve and adapt primarily based on our own internal logic rather than through external instruction. We are internally driven and do not depend on teaching to define our goals or our behavior.
The technical word for this is autopoiesis—'self production’. Our goals aren’t given to us at birth; so long as our bodies endure, our goals are an endless waltz between us and the environment we inhabit—so much so that, while our environment shapes us, we humans also actively shape our environment to be more relevant to our goals. In that sense, we can see the problem with machine intelligence in its current state: there’s a gap between the theory-laden instructions in the machine and the environment. Without prompting, a robot tasked to make coffee won’t think about re-organizing the kitchen for efficiency. Without prompting, it won’t think about shopping for beans. An AI is reliant on external instructions to act. In normal operation today, long before an AI robot’s parts expire, its instructions will expire: it will make the coffee very well, and then it will stop.
Can AI be embodied, like us? Yes, most certainly—and with faster and stronger memory, hands, legs, ears, and eyes. But can AI be alive, like us—autopoietic?
In some sense, that’s the final question in this sci-fi narrative arc that we are all riding together. For now, while sincere research continues, I’m aware of no promising research plans or even commercial incentives for autopoietic AI. So, let’s plan for a future in which AI doesn’t wake up. Let’s plan for a future in which AI continues to require external instructions to define its behavior.
After some additional diagnosis to fully understand the likely trajectories of the technology up to the edge of autopoiesis, policy questions await. How can legal training help humans keep up? Should law firms implement AI-awareness programs? What does legal education look like for hybrid AI-human practice? How do we do business? What long-term policies should law firms develop to maintain their relevance in an era of rapid AI advancement? What billing models will still work? Are there new legal service offerings to explore? How can law firms measure the human lawyers’ edge in contextual reasoning and professional intuition? Are there new regulatory frameworks needed for the profession? What positions should firms and bar associations take with policymakers?
Getting these questions right, and sharing our prescriptions within the profession, is essential if we want to act effectively to shape our future rather than be swamped by the technological change.