Imagine you're at a cocktail party. You're chatting with a colleague about a recent matter at work, effortlessly weaving together the background, anecdotes, and the occasional witty aside. Suddenly, you have a strange feeling - as if you're not really the one speaking. The words just... flow, one after another, like a machine.
Daniel Kehlmann wrote an interesting article in The Guardian last week, in which he recounts that anecdote and goes on:
We have made language itself speak… we have demonstrated that for intellectual activities we considered deeply human, we are not needed; these can be automated on a statistical basis, the “idle talk”, to use Heidegger’s term, literally gets by without us and sounds reasonable, witty, superficial and sympathetic – and only then do we truly understand that it has always been like this…
I’ve been thinking about this phenomenon for a long time. Now my thoughts are turning to our generation’s apparent collision with ‘thinking technology’. Or, really, to the next generation of any group of people who sometimes think for a living.
Will they be able to carry forward our particular professional traditions? Many, if not most, of our Latinate lawyerly practices were handed down from the contingent necessities of very different technological times. The 14th-century Inns of Court clustered around irreplaceable law libraries, and they still form the template for bar associations in the information era. We can and will work within those traditions, but the directions in which they evolve will necessarily be adaptations to new technological times.
So, to get to first principles and then build back up with the new technologies that are already here, I’ve been thinking about the basic kind of human work on which the legal system is built. And playing with LLMs. And reading philosophy books.
What is a language model?
Let’s discuss large language models. Broadly, we’re leveraging our language machines to do at least some of the thinking part of what we thought was human work. Kehlmann elaborates:
Since I’ve been using the large language model, I can actually perceive it: … I feel on my tongue how one word calls up the next, how one sentence leads to another, and I realise, it’s not me speaking, not me as an autonomous individual, it’s the conversation itself that is happening.
Of course, there is still what Daniel Kahneman calls “System 2”, genuine intellectual work, the creative production of original insights and truly original works that probably no AI can take from us even in the future. But in the realm of “System 1”, where we spend most of our days and where many not-so-first-class cultural products are created, it looks completely different.
Don’t think of a language model as a thing that, itself, can write poems or pretend to be human. A language model is an informational tool that provides you with context-dependent, reliable information. Just as Daniel Kahneman’s “System 1” provides you with context-dependent, reliable directions for your commute to work while you think about something else (using “System 2”).
“Reliable information” may be questioned, given LLM hallucinations. But the reliability of an LLM’s output is not the reliability of a calculator’s output; it is the messy, contextual reliability of language used as a tool to navigate through a problem.
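To make Kehlmann’s “one word calls up the next” concrete, here is a minimal sketch, assuming the Hugging Face transformers library, PyTorch, and the small open GPT-2 model (my illustrative choices, not anything discussed above). It simply prints the model’s most probable next tokens for a prompt:

```python
# A language model is, mechanically, a next-token probability machine:
# given the text so far, it scores every possible next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The court held that the contract was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```

Sample from that distribution, append the winner, and repeat: that loop is all the fluency there is. Nothing in it carries a calculator-style guarantee, which is why the reliability is contextual rather than exact.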
Language models are already being used this way to provide mundane utility. If you’re dealing with a complex mechanical problem, most of the work is in troubleshooting. Unless you happen to be a very experienced mechanic, you can do much, much better with some contextually aware “System 1” information drawn from all the mechanics’ material on the internet and all the manuals and textbooks ever written. This is what language models do, and will do, more and more: they boost an average person to a much higher “System 1” level of marginal productivity.
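In practice, that boost mostly comes from putting the right context in front of the model. Here is a hedged sketch of the troubleshooting use case using the openai Python client; the model name, the system prompt, and the placeholder manual text are my assumptions for illustration, not a product recommendation:

```python
# Sketch: stuff the relevant "System 1" context (a manual excerpt, the
# observed symptoms) into the prompt and let the model pattern-match.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

manual_excerpt = "..."  # hypothetical: the relevant service-manual pages
symptoms = "Intermittent rough idle when warm; no fault codes stored."

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are an experienced mechanic. Ground your "
                    "troubleshooting steps in the manual excerpt provided."},
        {"role": "user",
         "content": f"Manual excerpt:\n{manual_excerpt}\n\n"
                    f"Symptoms:\n{symptoms}\n\n"
                    "What should I check first, and why?"},
    ],
)
print(response.choices[0].message.content)
```

The design point is the context: the more of the relevant manual and symptom detail you supply, the more the answer resembles an experienced mechanic’s first-pass triage.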
It’s happening first and fastest with coding. Here are some examples from the 25 July installment of Zvi Mowshowitz’s blog. Read them carefully. Follow the insider discussion at the reddit and X.com links, and satisfy yourself that these reports are both significant and sincere.
Coding is seriously much faster now, and this is the slowest it will ever be.
From a reddit post:
“It's mind-blowing how quick I can move now with sonnet 3.5, and I'm not even saying LLMs in general because this is the first one of them that I actually feel this comfortable with. Like, I'm pretty sure I could implement copies of the technical parts of most popular apps in the app store > 10x as fast as I could before LLMs.
I still need to make architectural and infrastructure decisions, but stuff like programming the functionality of a UI component is literally 10x faster right now and this results in such fast iteration speed…
I think what I'm writing here is particularly true for startups, and it's less true for big companies. For the company I work at, while LLMs are still helpful they aren't nearly as helpful as when building new products. I think this is mainly because I can't get the same overview of the architecture and it's therefore difficult to provide the LLM with all the relevant context.”
From replies on X.com:
It’s happening. Sully: “50% of our code base was written entirely by LLMs; expect this to be ~80% by next year. With Sonnet we’re shipping so fast, it feels like we tripled headcount overnight. Not using Claude 3.5 to code? Expect to be crushed by teams who do (us).”
Not only coding, either. Jimmy (QTing Tan): “It can also do hardware related things quite well too, and legal, and logistics (planning), and compliance even. I’ve been able to put off hiring for months.”
In other words, language models are boosting, now and into the future, the marginal productivity of people who create solutions to problems where a high level of informational context is useful.
We once had to train up “System 1” for that, inside our own minds. Or we could use other people for that. Now we’re making language, itself, do the work for us.
What is a lawyer?
A lawyer does a lot of things.
We're tool makers and tool-using problem-solvers. The word "attorney" traces back to the Latin "tornare" and "tornus," a lathe or turning tool. We take raw information and turn it into something useful, a research brief that gets to an answer in context. We optimize. Sometimes we even make tools out of ourselves or each other: junior associates, for instance, are essentially human research engines for senior partners.
But that's only half. "Attorney" also comes to us through the Old French "atorner," meaning to appoint someone to act on behalf of another. We're not just problem-solvers; we're problem-definers. We wear our clients' concerns like a robe, negotiating the complex terrain of their values and objectives. We spot their issues for them; we spend effort identifying the risks that might threaten what our clients prize. We understand our clients’ immediate high-value emergencies and explore what can be done in light of their dispassionate long-term objectives. At our best, I think, lawyers are professionally wise. We think through novel situations (i.e., we use “System 2”) using all our accumulated experience of what we and our clients value as right and good.
We also do a lot of other things. Social interactions, writing, institution building, judgment—there’s a long list. While most of these things right now are beyond the reach of large language models, generative AI, and any sort of technology, it seems like the separation of “legal tools” and “wise counsel” still holds. Wherever the problem is clear, technology can eventually offer a better, more optimized tool. Where someone needs to distill experience through analogy and values and decide what needs doing, rather than how, “lawyer” will hopefully always be synonymous with “wise counsel”.
Why is a lawyer?
As an industry, we’re getting uncomfortably close to something we should avoid: equating ourselves and our successors with the tools we use, something Nietzsche, de Beauvoir, Arendt, Foucault, Heidegger, and Kaczynski all wrote about. We’re abdicating choices about value to the technology, and the technology doesn’t care. We need to treat the why of what we do more carefully.
If we represent ourselves as productive legal tools for managing clients’ legal problems, we need to deliver on that promise; if we don’t, we will be as replaceable as an obsolete phone. It’s common to hear tech industry entrepreneurs anticipating the day when frustrating, incomprehensible legal problems are automated away. There’s no sympathy there for a legal industry that appears, from the outside, to have erected moats to obfuscate legal problems and charge rent for access to the problem-solving tools.
Unfortunately, a somewhat nihilistic vision of the profession seems to dominate the legal industry. This is from a “Long discussion with a senior partner at a major Bay Area law firm”:
A) [The senior partner] expects legal AI to decimate the profession: “Law firms charge by the hour, and generative AI specifically cuts time for many, many tasks”
B) [They are] unimpressed by most specific legal AI offerings [for law firms]: “ChatGPT with some prompting is still superior than specific tools”
C) Generative AI error rates are acceptable even at 10–20%: “You should see how dumb associates are, the partners have to correct everything anyway and don’t trust associates fully.”
D) The future of corporate law is in-house: “Lots of work will transition to in-house counsel. No need to hire external firms that charge by the hour when twenty minutes with ChatGPT can get you decent results; would personally recommend moving in-house.”
E) The future of law in general? “Good for users in areas of law where services were too expensive for many to afford, e.g., divorces; Terrible for juniors entering the profession; Trial litigation is likely to remain the only human-only zone” […]
Visions of the future grow out of the aspirations of especially powerful people and of society at large; or, in the case of a law firm, out of the aspirations of the partners and the culture of the firm. These questions are not only necessary to create a vision, they are the heart of what we do as wise counsel: What is possible? What is acceptable? What are our main objectives?
Law firms focused on productivity will hold to a vision of eliminating associates-as-tools and replacing them with more productive tools; then they will themselves be replaced by in-house counsel. Law firms focused on experienced, problem-defining counsel for clients will need another vision.
I’m starting to believe that our firm should make the raising of wise counsel the main objective of what we do. This isn’t to say that law firms of wise counsel shouldn’t be (more) profitable or (more) productive than competitors. But the result may be long-term, sustainable profit with significant vulnerability in the short term. Wise counsel, if we can realistically educate and retain them, will be productive as part of what they are. But productivity won’t be their reason for being.
There isn't anything in our market system that automatically says the right vision, or even a minimally humane vision, is going to emerge. Or prevail. The market will allocate apples and oranges and who's going to get an Apple Watch for Christmas. But it won’t deliver new paradigms for which futures are the most possible, the most achievable, or the most inspiring. So, win or lose, the struggle to do that visionary work is likely to be unproductive and yet valuable beyond measure. It’ll be excellent practice for gaining wisdom.