Taking Stock of 2023
If you see ghosts of AI Christmas past, present, and future in this list – you may not be hallucinating.
Reflecting on the "sick tapestry of potential" that AI has provided us lawyers this year, I want to step back and take stock of why these developments have been so exciting, so fear-inducing, and so dense with paradox.
The pace and nature of improvement
Scaling up the computing power devoted to AI has driven some seventy years of steady progress. We're only noticing this very recently because of the strangeness of exponential curves. That is the bitter lesson presciently articulated by Professor Richard Sutton back in March 2019. He wrote:
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation… The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
In other words, AI isn't improving primarily because of special human genius. It's improving primarily because we're adding (and plan to keep adding) exponentially more and faster computers to the pile.
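To see why exponential curves hide in plain sight, here is a minimal sketch. The figures are illustrative assumptions only (a seventy-year run with compute doubling every two years), not measurements of any real system:

```python
# Illustrative only: assume a 70-year run of steady exponential growth in
# compute, doubling every 2 years. These numbers are hypothetical, not
# measurements of any real AI system.

doubling_period = 2                                  # assumed years per doubling
compute = [2 ** (year / doubling_period) for year in range(71)]

final = compute[-1]
for year in (35, 56, 63, 70):
    share = compute[year] / final
    print(f"Year {year:2d}: {share:.4%} of the final year's compute")

# Halfway through the run (year 35), the curve sits at well under a
# thousandth of a percent of its final value -- steady exponential growth
# looks like nothing for decades, then like everything at once.
```

Under those toy assumptions, half the run accounts for a vanishingly small slice of the final compute, which is roughly why decades of steady scaling can feel like an overnight arrival.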
The scope of issues in the present
The issues surrounding AI products are vast. When the engineers say, "Great news: I've tested this AI and it looks safe," why might the lawyers still have a problem? Nine reasons:
Issues around AI business models (massive AI investments, pan-industry inexperience, AI-supplier volatility, etc.)
Issues around regulated AI use in law and other professions and businesses
Social externalities (risks that model outputs or embedded biases harm individuals or society)
AI-specific regulation (and market fragmentation)
AI data governance risks (dataset security, data privacy, right-to-know regulations, etc.)
AI evaluative performance liabilities (shaky test benchmarks, real-world performance issues, hallucinations, consistency of service issues, etc.)
Special cases of commercial terms (usage/licensing, termination, IPR related to dataset sourcing and dataset parroting, etc.)
AI decision accountability (risks of harmful content, model transparency and justification, and explainability and interpretability)
Challenges to patents and copyrights for AI-produced works
The promise of more in the architecture
While it's possible that we have reached diminishing returns on the language-model paradigm, that won't stop the thousand-fold effective increases in computational investment already locked in at Google and OpenAI for 2024. And if the paradigm's returns aren't diminishing…?
Prof. Sutton wrote more in The Bitter Lesson about why simple but scalable algorithms succeeded where other methods failed:
[T]he actual contents of minds are tremendously, irredeemably complex; …
They are not what should be built in, as their complexity is endless; instead, we should build in only the meta-methods that can find and capture this arbitrary complexity. …
We want AI agents that can discover, like we can, not which contain what we have discovered.