Daniel "Dazza" Greenwood is an attorney and researcher at MIT Media Lab and Executive Director of law.MIT.edu, specializing in computational law and AI. He advises companies and public sector organizations on legal tech and data rights and, as an attorney, he has consulted for entities like NASA and testified before Congress on laws surrounding electronic transactions.
Greenwood is currently tackling the challenge of the legal status of AI agency. Recognizing that AI systems often operate along a spectrum of autonomy and accountability, he delineates precise roles and responsibilities for AI agents conducting transactions. By integrating these definitions into contract structures and legal frameworks, he bridges the gap between abstract AI concepts and their practical, enforceable applications, aiming for clarity and trust in human-AI interactions.

The AI Agent Problem
The definitions of "AI agent" circulating in the field are legally unhelpful, to put it mildly. From a recent Princeton paper titled "AI Agents That Matter":
Many researchers have tried to formalize the [AI] community’s intuitive understanding of what constitutes an agent in the context of language-model-based systems. Many of them view it as a spectrum — sometimes denoted by the term ‘agentic’ — rather than a binary definition of an agent. We agree with this perspective.
The authors of that paper say AI systems are "more agentic" if they can operate in complex environments, can be directed easily by humans yet left to pursue tasks on their own, and exhibit certain "design patterns" like reflection, planning, or tool use.
Such descriptions of "agents" omit essentially all functional, measurable criteria for locating where liability resides within automated decision-making. Even systems at the "less agentic" end of the spectrum can autonomously create obligations for their users and cause harm to third parties. Any foundation for legal accountability and regulatory compliance needs clearer boundaries than a spectrum provides.
Common distinctions between types of AI agents, such as "software agents," "embodied agents," "tool-based agents," and "simulation agents" (see, e.g., Technology Review), are equally unhelpful. While intuitive on some level, the boundaries between these categories blur, and the taxonomy overlooks hybrids that combine intangible and physical actions or simple and complex interactions.
In short, without settling who or what is responsible for an AI agent's actions, lawyers have no guidelines for protecting clients or prosecuting claims.
We Are Approaching This Problem Fast
From a PYMNTS article, "This Week in AI" (Nov. 15, 2024):
OpenAI’s New ‘Operator’ AI to Handle Online Shopping
OpenAI is set to launch “Operator,” a groundbreaking AI agent capable of independently browsing the web and completing online transactions. Planned for a January release, the autonomous system will handle tasks from product research to purchases, potentially transforming eCommerce interactions. The move aligns with similar developments from Anthropic and Salesforce, signaling an industry shift toward AI agents that can execute complex tasks with minimal human oversight.
Apple Plans AI Smart Hub for Home Shopping and Control
Apple is set to launch a 6-inch AI-powered smart display as early as March, creating its first central command hub for home automation and shopping. The wall-mountable device will feature a camera, rechargeable battery and speakers, running a hybrid operating system combining Apple Watch and iPhone interfaces. The hub aims to simplify smart home control while enabling voice-command purchases through the Apple ecosystem.
AI Agents as a Service
Current agency law also lacks solutions. Greenwood notes that the Restatement (Second) of Agency § 1(1) (1958) defines agency as "the fiduciary relation which results from the manifestation of consent by one person to another that the other shall act on his behalf and subject to his control, and consent by the other so to act." That is a lot of "persons" for one definition, and we are extremely far from a technology capable of acting as a fiduciary.
Greenwood’s answer is to place the user relying on an AI agent (whether an individual or a corporation) in the role of the principal under the traditional law of agency, and he strongly warns against technology providers who arrogate that role to themselves.
To fulfill the role of the agent/fiduciary, he combines the AI technology and the technology provider into an "AI Agent System," which must collectively prioritize the user/principal's interests above all else. In practice, this means robust data protection that keeps the principal's private information and commercial transactions strictly confidential, plus mechanisms for error prevention and correction: security procedures that establish spending limits, error detection that triggers alerts, and security failures that provide grounds for transaction reversal.
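To make the shape of those obligations concrete, here is a minimal, hypothetical sketch in Python. It is not code from Greenwood's proposal; the class names, limits, and logging scheme are illustrative assumptions about how an "AI Agent System" might enforce a principal's spending limit, raise alerts, and record security failures as documented grounds for reversal.

```python
# Hypothetical sketch only: illustrates the guardrails described above,
# not any actual implementation from Greenwood or a specific provider.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Transaction:
    merchant: str
    amount_usd: float
    timestamp: datetime = field(default_factory=datetime.utcnow)


@dataclass
class AgentGuardrails:
    """Illustrative duties the AI Agent System owes the user/principal."""
    spending_limit_usd: float  # set by the principal, never by the provider
    alerts: list[str] = field(default_factory=list)
    reversal_log: list[Transaction] = field(default_factory=list)

    def authorize(self, tx: Transaction) -> bool:
        # Error prevention: block transactions above the principal's limit
        # and surface an alert rather than proceeding silently.
        if tx.amount_usd > self.spending_limit_usd:
            self.alerts.append(
                f"BLOCKED: {tx.merchant} ${tx.amount_usd:.2f} exceeds limit"
            )
            return False
        return True

    def flag_security_failure(self, tx: Transaction, reason: str) -> None:
        # Error correction: a logged security failure becomes documented
        # grounds for reversing the transaction with the third party.
        self.reversal_log.append(tx)
        self.alerts.append(f"REVERSAL GROUNDS: {tx.merchant}: {reason}")


guard = AgentGuardrails(spending_limit_usd=200.00)
assert guard.authorize(Transaction("bookstore", 45.00))         # proceeds
assert not guard.authorize(Transaction("electronics", 899.00))  # blocked
```

The design choice worth noticing is who holds the controls: the limit belongs to the principal, and every failure is logged in a form a third party (or a court) can rely on, which is exactly the certainty and confidence Greenwood's framing is after.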
As Greenwood concludes: “This approach not only provides principals with greater certainty but also empowers third parties to engage in AI-powered interactions with greater confidence and clarity, unlocking the tremendous benefits of this technology for all.”
His longer breakdown of this idea is well worth reading in full, as is his Substack more generally.