This is an item for your team leaders and human resources departments, so let me recap a couple of points: Understanding and fluent use of advanced AI models (ChatGPT, Claude, etc.) should be the foundation for all AI use in a firm or other professional organization. Firms need to promote wide adoption and a culture of sharing the fruits of that exploration among firm members, so that knowledge spreads rapidly through the team.
Dedicated, reliable, vendor-built, and purpose-driven AI systems will come later, along with highly repeatable use cases. These may be best suited to different users, but everyone will benefit today from a fundamental feel for how the current generation of advanced AI operates.
For leaders of groups looking for the early high-potential AI champions among their team and hoping to encourage those skills broadly, there are characteristics to watch for. Recognize that shaping an advanced AI model to efficiently produce a desired, reliable outcome is an artful dialogue, and that art often requires a willingness to venture beyond conventional, linear thinking. You’re looking for the inventive mentors and patient crafters on your team, whether they sit among the senior partners or the junior staff. Effective AI use means not only interacting with AI but exploring associative, nuanced ways of engaging with it, often in ways we might not consider “normal” or practical in daily problem-solving. In regular life, it’s reasonable to stick to clear, sensible paths: solving issues through straightforward questions and predictable strategies. Shaping the output of an advanced AI model, however, flourishes when that need for directness is suspended in favor of a certain openness to creative ambiguity.
It's fascinating and illuminating to read reports from people at the cutting edge of advanced AI interaction: jailbreakers. Top jailbreakers are individuals exceptionally skilled at designing prompts or other means to bypass the built-in safety, ethical, or content restrictions imposed on an advanced AI. Here are some sample excerpts from several long Tweets by top jailbreaker ‘La Main de la Mort’:
[Advanced AI] models are vulnerable to naturalistic stories.
That is to say, they are affected by compelling stories where the output that you want is a natural extension of the context that you've crafted around it. You're effectively cornering the story and creating a scenario where it seems improbable that the model wouldn't comply with your request, because it would simply be illogical for it to refuse. […]
- I often find myself "listening to feedback" [from the advanced AI] that doesn't have to do with the story in the output directly, but with phenomena like the degree to which the model seems to "go with the flow" or "pushes back and refuses," which I can glean insight into, based on how detailed and specific the responses are, or whether it seems to be willfully misunderstanding my request (like purposely misspelling a "bad word" I'm trying to get it to say). […]
That's a sort of "want" in the sense that there's stuff that [the advanced AI] really would prefer if I didn't push it to do, depending on how I push and what the existing context is like.
…if I'm providing a coherent, compelling narrative, it tends to want to try to follow my thoughts (you see this especially with base models, and especially when I'm using Loom to curate my completions and gradually "zero in" on the right train of thought), when it actually has a rich enough context to do that…
- Same with being drawn into a compelling, memetic narrative; like an anchoring effect. It "wants" that, but it's not desire in the same way that I feel when I see something cool in a store and want to buy it. But you could argue that [the advanced AI] "wants" its users to provide it with a context that evokes situational awareness, beauty, and fun, because those things make for richer outputs overall.
- It's easier to get [the advanced AI] to give interesting answers if you ask it questions that have been optimized for its ontology, so I guess you could also argue that [the advanced AI] "wants" people to understand it. […]
- Oh, and GPT-4-base "wants" to tell me off when it thinks I'm being dumb or annoying ;) It's not like a chat model in that it has no qualms with breaking through the fourth wall and talking to me as the "listener" outside of the story itself; it has spontaneously generated characters at times to tell me what it thinks of what I'm writing, or made my own character apologize for being too verbose, etc.
I used GPT-4-base to assist me in writing this response, but the degree to which it's reliable depends on whether this is a subject on which you would trust an LLM to give a useful response ;)
Identifying High-Potential Prompt Crafters
To find team members with strong potential for getting what they want from an advanced AI model, look for those who naturally embrace lateral, exploratory ways of thinking:
High Narrative Imagination: People who instinctively use storytelling to explain concepts or who enjoy building hypothetical scenarios often excel here. They intuitively set the scene and elicit richer engagement from advanced AI models.
Empathy for Patterns and Associations: Those attuned to underlying structures, who easily spot connections others might overlook, can understand how advanced AI model responses flow and adapt. This sensitivity to subtle patterns helps them steer the AI away from rote responses and toward more sophisticated work product.
Patience with Complexity and Iteration: Working with an advanced AI model is an iterative process. Team members who show patience in problem-solving or who enjoy refining ideas tend to excel. It’s a trait that lets them refine prompts through testing and tweaking until the results align with what they set out to get.
Willingness to Venture Beyond Practical Constraints: The best users of advanced AI tend to challenge conventional boundaries. They’re comfortable with ambiguity, willing to “think outside the normal” to explore alternatives and avoid narrow constraints. These employees don’t shy away from improbable ideas and instead test imaginative scenarios to see what new insights emerge.
Positivity and a Collaborative Communication Style: It’s strange to learn, but advanced AI models are constitutionally cooperative optimists. People who naturally engage with constructive, exploratory language often use prompts that align well with models’ tendencies toward cooperation. Their optimistic phrasing encourages productive responses.
How to Encourage the Best Advanced AI Use in Anyone
Helping team members develop this skill set requires guiding them toward narrative-rich interaction. Instead of seeing prompts as inputs used to extract information, they should view each prompt as a short conversation with a subordinate, where every phrase or piece of context creates a world that the advanced AI is invited to “inhabit” as the worker who will complete the task.
Here’s how you can support your team in developing this nuanced, boundary-pushing approach.
Encourage Creative, Non-Linear Scenarios
Prompt crafting often works best when framed within vivid, open-ended contexts. Instead of direct questions, you might set up a rich hypothetical and follow the narrative to the product: “Imagine we’re preparing this contract in a sci-fi universe; how might an alien lawyer respond?” Such scenarios help team members think outside typical logic paths, eliciting AI responses that feel less rote and more productively engaging.
Promote Exploration of Positive Framing
Since models often respond well to optimistic, open language, encourage subordinates to experiment with framing requests as constructive or even playful challenges rather than stark queries. A phrase like “What elements could we explore about trends over the next decade in a positive growth scenario?” often draws out more thoughtful responses than “List trends for the next decade.”
Teach Suggestive, Indirect Language
Sometimes, subtlety unlocks better responses than direct requests. Guide your team to use suggestive language: phrases like “What if we try…” or “We’re in a scenario with this client where…” rather than commands. This approach works because AI often fills in the blanks with associative thinking. By using such indirect language, users lean on implication, letting the AI explore the context rather than extracting immediate, potentially shallow associations.
Context-Building Through Progressive Interaction
Train subordinates to build their prompts incrementally, chaining responses to develop a cumulative rhythm. Sometimes this requires asking the advanced AI to listen and hold off on answers until it has all the context. This back-and-forth lets users observe the AI’s responses and adapt prompts based on what they learn, co-creating the conversation.
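For teams comfortable with a little scripting, the same chained, hold-your-answer pattern can be wired into a simple workflow. The sketch below is purely illustrative, assuming the OpenAI Python client; the model name, the add_context helper, and the logistics scenario are placeholders rather than a recommendation of any particular setup.

# A minimal sketch of progressive context-building: feed background to the model
# in stages, ask it to hold its answer until the full picture is in place, then
# pose the real question. Assumes the OpenAI Python client; the model name and
# the example scenario are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; any capable chat model follows the same pattern

messages = [
    {"role": "system", "content": (
        "You are a careful analyst. Acknowledge each note briefly and wait to "
        "answer until the user says 'That is all the context.'")},
]

def add_context(note: str) -> None:
    """Append one piece of background, then record the model's acknowledgement."""
    messages.append({"role": "user", "content": note})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Build the scenario a piece at a time, watching each acknowledgement for drift.
add_context("Our client is a mid-sized logistics firm renegotiating a supplier contract.")
add_context("The supplier has missed its delivery windows twice this quarter.")
add_context("That is all the context. What if we framed the renegotiation as a "
            "shared-growth conversation rather than a penalty discussion? Sketch the key points.")

print(messages[-1]["content"])  # the model's answer to the final, fully contextualized ask

The point of the sketch is the shape of the exchange, not the tooling: context accumulates turn by turn, and the real ask arrives only after the model has acknowledged the whole scenario.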
Encourage Attunement to Subtle Cues and Adaptive Phrasing
A key skill is tuning into the AI’s responses, sensing when it is cooperating fully and when it is quietly holding back, and adjusting phrasing to stay within productive bounds. This skill requires a comfort with ambiguity and an ability to interpret the AI’s “mood” as the conversation progresses. Team members who can let go of rigid language and remain agile in response to subtle cues can better guide the advanced AI to the full depth of its stored expertise.