4 Comments
Jan 30 · Liked by Harold Godsoe

Hrm... Insightful angle as always. This gets me thinking about potential advantages of giving AI systems legal personhood. Semi-rhetorically: can we gain insight from similar issues that arise in corporations due to systematic failures, as opposed to malign individuals?

author

Can you elaborate on the similar issues that you have in mind?

In a really, really general sense, the law tends to see (financial) harm + "caused by" + an accountable/legal person, before and to the exclusion of other factors. Whether a (corporate) person is causing that harm for "malign" or "systemic" reasons isn't easy to distinguish, and so it isn't a major factor.

At a very big scope, what is relevant is that the (corporate) person is responsive in all the ways that matter: its actions are the result of deliberative processes and can or should be alterable according to incentives and industry norms and laws and ethics and experience ... AI personhood makes little sense in this framing unless/until AI can consider and respond to legal incentives. (That said, there are all kinds of weird "legal personhood" situations in the world that don't fit this framing, and so more thought on this topic may be needed...)

Feb 1 · Liked by Harold Godsoe

Your framing in the second paragraph is really useful. Let me circle back to it. My use of the term "systematic" is as a generalization of the person/corporation dichotomy. Naïvely, I'm thinking of juridical personhood as a mechanism that lets the law graft on a bunch of pre-existing law that we have for natural persons. Your framing highlights the reason why this would be useful in the first place: there are problems that arise out of the system of humans collaborating even when individual humans aren't really at fault. Juridical personhood lets the weight of natural-personhood incentives apply to these systems. This is my (new) mental framework here.

Now, for an ABC Inc. that uses GPT-4 internally, I would guess that the current legal system already sufficiently incentivizes ABC to make sure its use of ChatGPT is kosher. The Grug + ChatGPT system is probably similar in this respect. However, as AI systems become more agentic and/or more powerful, their ability to procure recuperative damages and the like also increases. As long as the legal system remains powerful enough to coercively incentivize an AI-including system, I would guess that grafting the law relevant to persons onto sufficiently agentic AI systems provides a quick and reasonably effective way to incentivize those systems.

Now, once you've outstripped the power of the legal system, whether that's a fleet of OpenAI products purchased by Boeing or some Bostrom superagent, the point is moot.

author

I'm not sure if you're consciously including the element of control in that framing. "Incentivizes ABC to make sure its use of ChatGPT is kosher" and "AI-including system" imply that ABC has control over its AI at least as great as the control ABC has over its employees and other agents.

At the moment that's a serious consideration, to the extent that companies may not know how to control their licensed AI tools the way they know how to control their employees.

It would be another, more intense control issue if ABC's employees were growing in capabilities exponentially.
