Bain & Company just released a quarterly survey on AI readiness here. There’s not much of interest in it. (Corporate leaders are continuing to talk a lot about AI, etc., etc.) However, there is an interesting section on the top reasons large companies are not adopting generative AI. The leading reasons this quarter are all about internal unpreparedness: company tech platforms aren’t ready, there’s a lack of support for employees, and there’s an apparent lack of in-house understanding and expertise. Ironically, this is great news for small firms, because none of the things large enterprises are waiting on is actually necessary for using generative AI.
The first rule of using generative AI has hopefully already been learned: treat it more or less like you treat Google search. Professionals in all areas of practice use and rely upon Google searches all the time. The same rules apply: don’t leak client confidential information through the search or prompt, and take care to avoid security breaches of other types. And, like Google search, the most striking thing about the advent of this incredibly powerful new technology is that the core tech is available essentially for free. (Claude 3.5 Sonnet, the latest breakthrough model in generative AI, was released last week. It’s also free. Give it a try.)
The second rule of using generative AI is that it’s all currently research and development in every domain of expertise. If you’re an expert at anything, you don’t need to wait for a new software platform, practice group leader, or enterprise vendor. No one knows how to use this technology at its best yet. As Ethan Mollick writes in a recent article on this topic, “The AI labs themselves don’t know what they have built, or what tasks LLMs are best suited for.” That means no one knows how to use it better than you do. No one but you, the expert in what you do, is qualified to do the R&D to get AI to help you with your job.
Here are three examples of experts at very specific things discovering that generative AI can do specific and weirdly useful things in their domain.
Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit made news with his May 28, 2024 concurring opinion in a case, pondering the use of generative AI by courts to find the most precise version of “ordinary meaning” when the dictionary definition feels off. In his case: what does “landscaping” actually mean? It’s worth your time to read the whole thing.
Ethan Mollick (again) provides examples from his recent article on this same topic. He writes about love letters to job applicants here:
Dan [Shapiro, co-founder of Glowforge] is an expert at building culture in organizations and he credits his job descriptions as one of his secret weapons for attracting talent. But handcrafting these “job descriptions as love letters” is difficult for people who haven’t done it before. So, he built a prompt (one of many Glowforge uses in their organization) to help people do it. Dan agreed to share the prompt, which is many pages long - you can find it here.
Mollick also shares the amazing LLM skill discovered by someone who does a lot of industrial gauge reading:
If you go to many factory floors in the US, you will see a lot of old manufacturing equipment with analog dials and no way to connect them to modern manufacturing software and processes. But it turns out that LLMs can be trained to read gauges, and AI can even make smart decisions about what an anomalous reading might mean and when to alert humans. With expert guidance, I wonder if LLMs will allow older plants to skip over an entire phase of development, the way that cell phones allowed many countries to skip the step of building elaborate landline networks.
Learn what you can from simple prompting of generative AI, share what you find among colleagues doing similar work (an AI-use show-and-tell at a low-pressure monthly lunch works wonders), and only move on to more advanced methods and AI tools when you’ve discovered limitations that you can’t bridge but really want to.
Quoting Mollick again, “Success is going to come from getting experts to use these systems and share what they learn. For companies, this means figuring out ways to incentivize and empower employees to discover latent sources of expertise and share them. There are tons of barriers to doing this in most companies - politics, legal issues (real or imagined), costs, etc. - but for organizations that succeed, the rewards will be large.”