Danny Tobey on DLA Piper’s AI practice
If you’re new to AI, check out some basic definitions here (Nicola Shaver is also worth following on LinkedIn while you’re there). Then listen to Episode #119 of the Eye on AI podcast (April 28, 2023; 55:41), also available on YouTube. It’s an interview with Danny Tobey, co-head of DLA Piper’s AI practice (with fellow co-head Bennett Borden).
There are a few great takeaways from the interview:
DLA Piper’s use of AI within the law firm: “This is public. We are working with a tool called ‘CoCounsel’ that we have been beta testing with [Casetext] for some time, and now we’re using it. We have testing pods. We have a huge number of people across the firm taking work that we’ve done, let’s call it the old-fashioned way. And we know those documents, those deals, those litigations intimately, and we are testing against the AI technology to see where it’s an add and see where it’s a detraction. This technology, GPT-4 broadly, and the specific version we’re working with, it’s going to transform the legal industry. There’s no question. … And we are going to win because a lawyer is not going to be replaced by AI. But lawyers who use AI are going to replace lawyers who don’t.”
Cases in the DLA Piper AI practice: (a) AI-enabled product liability litigation and advice; (b) affirmative class action litigation against insurance AI algorithms; (c) IP litigation involving AI; (d) generative AI clients seeking advice on IP fair use; and (e) commercial representations and warranties about AI.
An overview of the regulatory landscape for “old-fashioned, task-specific automation AI” (from just six months ago): the current state of US legislation and regulation, US bureau and agency actions, and some discussion of the EU approach through the AI Act.
Questions about the new liability challenges posed by “general-purpose AI” like GPT-4: A fiduciary normally stands between an AI product and the consumer to ensure safety and fulfill the manufacturer’s duty of care (the US learned intermediary doctrine). But what happens when the AI is more learned than the learned intermediary? Does the learned intermediary override the machine, or assume that the machine is looking at a trillion data points and has seen something humans just can’t see? Who is then responsible for an erroneous (or allegedly erroneous) decision? The data? The programmer? The AI, as some kind of agent of the manufacturer or the doctor or the lawyer? Or does the doctor, lawyer, or other professional continue to stand as the final gatekeeper?
Questions about the new regulatory challenges posed by “general-purpose AI” like GPT-4: Where does general-purpose AI, unrestricted and able to comment on anything under the sun, leave regulators? How do regulators test that? How do we externally assess its safety?
Existentially important questions: AI is moving faster than anyone thought. Is anyone going to slow this technology down? We’re past just saying AI needs to be safe and responsible; now we need to be asking: how do we make it safe? What are the sandboxes? What are the firewalls? What are the forbidden uses? And, most importantly, how do we enforce those?