Seeking internal guidelines for your firm’s use of AI? This is not a solved problem, as the use of AI continues to evolve and expand. Of course, if you think you’ve solved it and can share, please let me know! In the meantime, I can suggest two concrete approaches.
Californi-certification
The first approach is to refer to existing professional rules of conduct for lawyers, as the State Bar of California did. We lawyers are conservative and thoughtful about how we behave, and this particular example of leveraging existing conduct rules for a new problem is superb. (See here for the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law.) Here’s a summary, to give a flavor of a very good general code of practice which could easily be applied in any professional setting:
Ensure Confidentiality and Security: Ensure that any generative AI solution used does not compromise the confidentiality and security of client or customer information. This involves anonymizing customer data (a short redaction sketch follows this list), consulting with IT or cybersecurity professionals, and understanding how a product uses inputs. (Cal R Prof Conduct, Rule 1.6, Rule 1.8.2)
Maintain Competence and Diligence: Use generative AI competently, being aware of its benefits and risks. Understand the technology, its limitations, and critically review its outputs for accuracy and bias. Avoid over-reliance on AI at the expense of professional judgment. (Cal R Prof Conduct, Rule 1.1, Rule 1.3)
Comply with Relevant Laws and Responsibilities: Use generative AI in compliance with all applicable laws and codes of conduct in relevant local and larger jurisdictions, including those related to AI, privacy, cross-border data transfer, intellectual property, and cybersecurity. Do not engage in or assist with unlawful conduct. (Cal R Prof Conduct, Rule 8.4, Rule 1.2.1, Rule 8.5)
Implement Supervision and Training: Establish clear written policies regarding the use of generative AI. Ensure compliance with these obligations through appropriate training and supervision addressing both the ethical and practical aspects of AI use. (Cal R Prof Conduct, Rule 5.1, Rule 5.2, Rule 5.3)
Communicate AI Use and Implications: Disclose to clients or stakeholders the intention to use generative AI, including how it will be used and the associated benefits and risks. Adhere to any instructions or guidelines that limit AI use. (Cal R Prof Conduct, Rule 1.4, Rule 1.2)
Ethically Manage AI-Related Costs and Billing: If applicable, charge ethically for work produced with the aid of generative AI, focusing on actual time spent. Ensure costs associated with generative AI are in compliance with applicable law and clearly explained in agreements. (Cal R Prof Conduct, Rule 1.5)
Ensure Accuracy and Candor: Review all AI-generated outputs for accuracy before any formal submission or use. Correct any errors and ensure candid communication, especially in legal contexts. Be aware of jurisdictional requirements regarding AI disclosure. (Cal R Prof Conduct, Rule 3.1, Rule 3.3)
Prevent Discrimination, Harassment, and Retaliation: Be alert to potential biases in AI and the risks they pose. Engage in continuous learning about AI biases and establish policies to identify, report, and mitigate such biases. (Cal R Prof Conduct, Rule 8.4.1)
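On the confidentiality point above, a very small sketch can make “anonymize before you paste” concrete. This is my own illustration, not part of the State Bar guidance; the patterns and the client name are placeholders, and a regex pass is a starting point, not a substitute for a proper review process.

```python
# A minimal, illustrative sketch of stripping obvious client identifiers from text
# before it is pasted into a generative AI tool. Patterns and the client name are
# assumptions; adapt them, and keep a human review step in front of any AI use.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\bAcme Holdings\b", re.IGNORECASE), "[CLIENT]"),  # a hypothetical client name
]

def redact(text: str) -> str:
    """Replace known identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Email j.doe@acme.com or call 415-555-0199 about Acme Holdings."))
# -> "Email [EMAIL] or call [PHONE] about [CLIENT]."
```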
Certify What Works for You
The second approach is more bespoke. Your firm or organization is embedded in its own situation: a subnational, national, or international market with particular industry needs and desires, and a rapidly evolving AI landscape. So, ask your internal AI team or individual AI champion to adopt the mantra: “try everything, be very careful, and report back”.
From those reports, assemble a list of what works and what doesn’t work in the context of your business. Keep evaluating, and return to those evaluations over time. Looking at the past twelve months of the AI era, any conclusions about what generative AI can or cannot do would very likely have been obviated twice over already.
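For the “report back” part of that mantra, even a very small shared log goes a long way. Here is a minimal sketch, purely my own suggestion rather than anything from the guidance above, of one way an AI champion might record each experiment; the field names and the CSV format are assumptions to adapt to your own practice.

```python
# One small record per experiment, appended to a CSV the whole team can re-read
# when the tools change. Field names and file name are illustrative assumptions.
import csv
from dataclasses import dataclass, asdict, fields
from datetime import date

@dataclass
class AIExperiment:
    run_date: str   # when the experiment was run
    tool: str       # e.g. "Claude 2 via OpenGPTs"
    task: str       # what you asked the tool to do
    worked: bool    # did the output meet your bar?
    notes: str      # caveats, errors, confidentiality concerns

def log_experiment(record: AIExperiment, path: str = "ai_experiments.csv") -> None:
    """Append one experiment record to a shared CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIExperiment)])
        if f.tell() == 0:  # a brand-new file gets a header row
            writer.writeheader()
        writer.writerow(asdict(record))

log_experiment(AIExperiment(
    run_date=str(date.today()),
    tool="Claude 2 via OpenGPTs",
    task="First draft of a routine engagement letter",
    worked=True,
    notes="Needed manual fixes to the governing-law clause; no client data sent.",
))
```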
If you have someone in your organization who knows and understands GitHub, there is an additional tool you can employ to assist with this process.
OpenGPTs describes itself as aiming to provide functionality comparable to OpenAI’s GPTs while enabling customization with sandboxes, custom actions, tools, analytics, drafts, and sharing. However, the most useful feature may be the ability to switch between GPT-3.5 Turbo, GPT-4, Azure OpenAI, and Claude 2, with more LLMs likely to be added soon. The ability to select different LLMs gives your teams an opportunity to compare and contrast LLMs before committing to any one company or AI system. Each LLM is accessed through its API, so no monthly subscription is needed; your team pays only for actual usage (at a cost likely to be significantly less than the standard USD 20 per person per month).
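OpenGPTs handles the model switching in its own interface, but if your team would rather script a head-to-head comparison directly, a minimal sketch might look like the following. This is my own illustration, not OpenGPTs code: it assumes the official openai and anthropic Python SDKs, API keys set as environment variables, and placeholder model names you would swap for whatever your accounts can access.

```python
# Send the same prompt to two different LLM providers and print the answers
# side by side, so the people who would actually use the output can compare them.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "Summarise the confidentiality risks of pasting client data into a chatbot."

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model your account offers
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-2.1",  # placeholder; use whichever model your account offers
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    # One answer per provider, printed in turn for the team to review.
    for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```

Run a handful of representative prompts through each model and have the eventual users judge the answers; paying per call for an exercise like this typically costs far less than a month of subscriptions.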
Next?
What are your internal pressures to develop an AI policy? Let me know in the comments whether the above approaches would work for your firm or organization, and whether you’ve tried anything similar, or radically different.