Here’s a nice quote attributed to Stewart Brand (expert on the future and co-founder of the Long Now Foundation) about where to find the active negotiation that will determine what happens to our society next:
If you want to know where the future is being made, look for where language is being invented and lawyers are congregating.
Minds, Interns, and Cogs
Well, the lawyers are congregating here, so let’s look at the language for AI that’s being invented. Drew Breunig has a nice article on parsing the language around AI use cases. His article categorizes AI into three main use types along the axis of relative autonomy: Minds*, Interns, and Cogs.
Minds are speculative but drive investment, policy, strategy, philosophy, and regulation as if they already existed. They are hypothetical super-intelligences, sometimes called Artificial General Intelligence (AGI), with an ambitious aim of fully autonomous decision-making and minimal error tolerance. The mere potential for Minds has recently been behind geopolitical machinations, the revival of the nuclear energy industry, one of the largest fundraising events in history, a series of international summits, and hours of US Congressional testimony on the motivations and necessary regulation of the big AI labs.
(* Breunig uses the term “Gods” here, but for me that creates dramatic or mystical overtones that aren’t helpful. “Minds” retains the cognitive aspect: the advanced, all-encompassing decision-making capacity and autonomy that “Gods” was aiming for. It also keeps some ominous weight, without overstating capability or implying infallibility or divinity.)
Interns are real, supervised AI that assist mixed human-AI workflows across almost all sectors while being prone to errors that knowledgeable users can correct. If you’re primarily using Interns for playful interaction, you might instead call them “Toys”. Examples include general tools like ChatGPT and Claude, and more specialized Interns like Thomson Reuters’ CoCounsel for lawyers, GitHub Copilot for coders, Suno for music makers, or Adobe Firefly for designers.
Cogs are the most prevalent AI implementations today, focused on performing a single, well-defined task autonomously within larger systems or pipelines. They require low error rates and are commonly used in enterprise contexts, often powered by small, fine-tuned AI models. Examples include fraud detection in banking, demand forecasting in retail, visual inspection systems for manufacturing, or medical imaging analysis in healthcare.
Dividing AI into these categories makes it much easier to think about the impacts of each, and about the corresponding legal restrictions each might warrant.
Adding OpenAI’s Levels
Minds, Interns, and Cogs are roughly arranged by the power of the AI, or its autonomy. This framing also corresponds with OpenAI’s five Levels of AI progress, which range from the kind of AI available today that can interact in conversational language with people (Level 1) to AI that can do the work of an organization (Level 5). (These five levels of progress, similar to the levels of self-driving car autonomy and intended for OpenAI’s investors, were first reported by Bloomberg and are worth reading a little more about for the curious.)
Putting those two scales together looks like this:
Cogs (Task Systems)
    OpenAI Level 1: Chatbot Models
Interns (Assistance Models)
    OpenAI Level 2: Reasoner Models
    OpenAI Level 3: Agent Models
Minds (Powerful Models)
    OpenAI Level 4: Innovator Models
    OpenAI Level 5: Organization Models
The five levels of AI development outlined by OpenAI—starting with basic conversational abilities and reaching the hypothetical capability of organizational automation—add a granular scale of power and impact to the Minds, Interns, and Cogs framework.
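For the programmatically inclined, here is a minimal sketch of that combined scale expressed as a lookup table. The category and level labels come from Breunig and the reported OpenAI scale; the data structure and function are my own illustration, not anything from either source.

```python
# A sketch of the combined autonomy scale as a simple lookup table.
# Category names follow Breunig's taxonomy; level labels follow the
# reported OpenAI scale. The structure itself is illustrative only.

AUTONOMY_SCALE = {
    "Cogs (Task Systems)": [
        (1, "Chatbot Models"),
    ],
    "Interns (Assistance Models)": [
        (2, "Reasoner Models"),
        (3, "Agent Models"),
    ],
    "Minds (Powerful Models)": [
        (4, "Innovator Models"),
        (5, "Organization Models"),
    ],
}

def category_for_level(level: int) -> str:
    """Return the Breunig category containing a given OpenAI level."""
    for category, levels in AUTONOMY_SCALE.items():
        if any(lvl == level for lvl, _ in levels):
            return category
    raise ValueError(f"Unknown level: {level}")

print(category_for_level(3))  # -> "Interns (Assistance Models)"
```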
Flexible and Future-proof Legal Definitions
But this doesn’t quite get at some important distinctions needed to understand the liabilities and risks of different types of AI. On the above scale of power or autonomy, AI approaching super-intelligent Minds demands minimal error tolerance (or bad things might happen), yet that same demand for low error rates applies to Cogs, while Interns, in the middle of the power range, are oddly free to make occasional mistakes.
Sun has another structure for defining AI in the law. He starts with the principle: “If the goal is to make the law flexible and future-proof, I suggest going for ‘definitions by purpose’.” Purpose-driven distinctions can help attorneys frame AI liabilities in terms of reliability demands. This approach makes it possible to assign liability more precisely, whether for narrowly focused Cogs or potentially far-reaching Minds. AI with a very narrow purpose has correspondingly high demands for reliability; AI with a less defined purpose has lesser demands for reliability. With some minor changes here for readability, Sun proposes:
Suggested definition: “artificial intelligence model”
means a particular design or version of a product that implements artificial intelligence (but is not an artificial intelligence system) –
where “artificial intelligence” means: (a) machine learning (a method whereby a machine derives patterns or inferences from data to generate predictive outputs without being explicitly programmed to do so; adapted from ISO/IEC 22989:2022); or (b) any other engineered method or functionality that is designed to generate predictive outputs for a given set of objectives or parameters.
[…] As an example, a large language model [e.g., GPT-4] could be referenced as an “artificial intelligence model that processes or generates natural language text”.
Suggested definition: “artificial intelligence system”
means an application or process that uses an artificial intelligence model to carry out a purpose or function for which the application or process was deployed (adapted from ISO/IEC/IEEE 29119-1:2022) –
where “artificial intelligence” means: (a) machine learning (a method whereby a machine derives patterns or inferences from data to generate predictive outputs without being explicitly programmed to do so; adapted wording from ISO/IEC 22989:2022); or (b) any other engineered method or functionality that is designed to generate predictive outputs for a given set of objectives or parameters.
[…] As an example, generative AI [e.g., DALL-E] could be referenced as an “artificial intelligence system that generates multi-media content”.
Fitting these definitions to some earlier examples, for the least autonomous:
AI System: Basic task-based Systems (Cog AI Systems), such as fraud detection in banking.
AI Model: Basic Chat Models, like GPT-3.
And for the most sophisticated systems currently in use:
AI System: Specialized Assistants (Intern AI Systems), such as Thomson Reuters’ CoCounsel for lawyers.
AI Model: Frontier Models (Intern AI Models) like GPT-4o.
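As a rough sketch, Sun’s model/system distinction can also be expressed in code, with a system wrapping a model and carrying the purpose for which it was deployed. The class and field names below are my own, for illustration; only the definitions they encode come from Sun.

```python
from dataclasses import dataclass

# Hypothetical types illustrating Sun's purpose-based definitions:
# a model implements an AI capability; a system deploys a model to
# carry out a purpose or function.

@dataclass
class AIModel:
    """A particular design or version of a product implementing AI."""
    name: str
    capability: str  # e.g., "processes or generates natural language text"

@dataclass
class AISystem:
    """An application that uses an AI model for a deployed purpose."""
    name: str
    model: AIModel
    purpose: str  # the function for which the system was deployed

# Illustrative pairing drawn from the article's examples:
frontier_model = AIModel(
    name="GPT-4o",
    capability="processes or generates natural language text",
)
assistant = AISystem(
    name="CoCounsel",
    model=frontier_model,
    purpose="legal research assistance",
)

print(f"{assistant.name} deploys {assistant.model.name} for: {assistant.purpose}")
```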
Autonomy vs Reliability
There isn’t going to be a pat definition of AI so long as innovations have no time to settle, the underlying technology shifts between symbolic and ML paradigms, and systems and models are put to vastly different purposes (or no particular purpose at all). But putting it all together, we can see the shape of the present and the future.
Each of the points in the top-left triangle of the chart, connected by dotted lines, represents the spectrum of reliability demands, autonomy levels, and legal considerations that AI development raises, especially as the technology edges closer to fully autonomous systems. We can see what we need to define, and what we need to prepare to encompass, going forward. Both regulation and commercial contracts will need to distinguish between AI that demands reliability (even with human oversight) and AI that can be used effectively with only moderate human supervision, but that distinction will converge on very strict liabilities as innovation and progress continue.
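As a loose sketch of that regulatory distinction, one could imagine triaging a deployment into a liability posture from its autonomy level and its error tolerance. The thresholds and labels below are invented for illustration and are not drawn from any regulation or from the sources above.

```python
def liability_regime(autonomy_level: int, error_tolerance: str) -> str:
    """Illustrative triage of an AI deployment into a liability posture.

    autonomy_level follows the 1-5 OpenAI-style scale; error_tolerance is
    "low" (reliability demanded) or "moderate" (human supervision expected).
    Thresholds are invented for illustration only.
    """
    if error_tolerance == "low" or autonomy_level >= 4:
        return "strict reliability: tight warranties, heavy regulation"
    return "supervised use: moderate duties, human-oversight requirements"

# Cogs demand low error rates despite low autonomy; Minds converge
# on strict liability regardless of tolerance.
print(liability_regime(1, "low"))       # a Cog, e.g., fraud detection
print(liability_regime(3, "moderate"))  # an Intern, e.g., a coding assistant
print(liability_regime(5, "low"))       # a Mind-level organization model
```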
This framework, along with the accompanying chart, serves not only as a basis for current legal interpretation but also as a forward-looking tool to guide AI-related policy and legal benchmarks as these technologies mature. Where to set your own benchmarks and legal definitions will be a fact- and risk-dependent exercise, but this framing is a good place to start.