I’ve previously recommended Prof. Ethan Mollick of the Wharton School as the best source for regular, excellent insights into A.I. I also think Ezra Klein’s column and podcast in the NYT are an excellent place to find the leading edge of common discourse on a lot of topics.
Last week, Klein began a three-part series on A.I. by bringing Prof. Mollick onto his podcast. It’s worth listening to (or reading) the entire conversation more than once to absorb the best common-sense A.I. consensus of the moment.
In the meantime, I’ve collected seven of the most interesting tips, along with a few insights, as quotes from Prof. Mollick:
A.I. Tip 1: Use Unadulterated A.I., Not Specialized Apps
I recommend working with one of the [foundation] models as directly as possible, through the company that creates them. And there’s a few reasons for that. One is you get as close to the unadulterated personality as possible. And second, that’s where features tend to roll out first.
A.I. Tip 2: Use it Continuously for Ten Hours
[T]here’s a lot of things that get in your way as a writer. So I would get stuck on a sentence. I couldn’t do a transition. [I’d ask the A.I.:] “Give me 30 versions of this sentence in radically different styles.” There’s 200 different citations. I had the A.I. read through the papers that I read through, write notes on them, and organize them for me. I had the A.I. suggest analogies that might be useful. I had the A.I. act as readers, and in different personas, read through the paper from the perspective of, “Is there some example I could give that’s better? Is this understandable or not?”
… And that’s very typical of the kind of way that I would, say, bring it to the table. Use it for everything, and you’ll find its limits and abilities.
… 10 hours is as arbitrary as 10,000 steps. Like, there’s no scientific basis for it. This is an observation. But it also does move you past the, “I poked at this for an evening”, and it moves you towards using this in a serious way. I don’t know if 10 hours is the real limit, but it seems to be somewhat transformative.
A.I. Tip 3: It Is Good, but It’s Not as Good as You
It is, say, at the 80th percentile of writers based on some results, maybe a little bit higher. In some ways, if it was able to have that burst of insight and to tell you “this chapter is wrong, and I’ve thought of a new way of phrasing it”, we would be at that sort of mythical AGI level of A.I. as smart as the best human. And it just isn’t yet.
… The key is to use it in an area where you have expertise, so you can understand what it’s good or bad at, learn the shape of its capabilities.
… [W]e call it the jagged frontier of A.I., that it’s good at some stuff and bad at other stuff. It’s often unexpected. It can lead to these weird moments of disappointment, followed by elation or surprise. And part of the reason why I advocate for people to use it in their jobs is, it isn’t going to outcompete you at whatever you’re best at.
[Ezra Klein: This is the thing I have the most trouble keeping in my mind, that I need to use the A.I. as an imaginative, creative partner and not as a calculator that uses words.]
… For right now, we have a prosthesis for thinking. That’s, like, new in the world. We haven’t had that before — I mean, coffee, but aside from that, not much else. And I think that there’s value in that. I think [we need to be] learning to be a partner with this, and [learning] where it can get wisdom out of you or not … it asks good questions.
Prompting Tip 1: Chain of thought
[T]he idea, basically, of chain of thought, that seems to work well in almost all cases, is that you’re going to have the A.I. work step by step through a problem. First, outline the problem, you know, the essay you’re going to write. Second, give me the first line of each paragraph. Third, go back and write the entire thing. Fourth, check it and make improvements.
And what that does is — because the A.I. has no internal monologue, it’s not thinking. When the A.I. isn’t writing something, there’s no thought process. All it can do is produce the next token, the next word or set of words. And it just keeps doing that step by step. Because there’s no internal monologue, this in some ways forces a monologue out in the paper. So it lets the A.I. think by writing before it produces the final result. And that’s one of the reasons why chain of thought works really well.
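The four-step structure Mollick describes can be sketched as a single prompt. This is only an illustration of the pattern, not something from the conversation; the essay topic and exact step wording are placeholders:

```python
# Build a chain-of-thought style prompt: ask the model to work step by
# step, producing intermediate "thinking" in writing before the final
# answer. The steps mirror the four stages described above.
def chain_of_thought_prompt(topic: str) -> str:
    steps = [
        "First, outline the essay.",
        "Second, give me the first line of each paragraph.",
        "Third, go back and write the entire essay.",
        "Fourth, check it and make improvements.",
    ]
    return (
        f"Write an essay about {topic}. Work step by step:\n"
        + "\n".join(steps)
    )

print(chain_of_thought_prompt("the jagged frontier of A.I."))
```

The point is simply that the prompt forces the model to emit its intermediate work as text, since it has no internal monologue to do that work silently.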
Prompting Tip 2: Few Shot
One of the techniques you [can use] to shape it [is] called few-shot, which is giving an example. So the two most powerful techniques are chain of thought, which we just talked about, and few-shot, giving it examples. Those are both well supported in the literature. And then, I’d add personas.
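A few-shot prompt just prepends worked examples so the model imitates their format and style. A minimal sketch (the Q/A layout and the example pairs are illustrative, not from the conversation):

```python
# Build a few-shot prompt: show the model example question/answer pairs,
# then pose the real question in the same format.
def few_shot_prompt(examples: list[tuple[str, str]], question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = few_shot_prompt(
    [("Summarize: 'The cat sat.'", "A cat sits."),
     ("Summarize: 'Rain fell all day.'", "It rained.")],
    "Summarize: 'The meeting ran long.'",
)
```

Ending the prompt at `A:` invites the model to continue in the pattern the examples established.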
Prompting Tip 3: Give your AI a Persona
So this is actually almost more of a technical trick, even though it sounds like a social trick. When you think about what A.I.s have done, they’ve trained on the collective corpus of human knowledge. And they know a lot of things. And they’re also probability machines. So when you ask for an answer, you’re going to get the most probable answer, sort of, with some variation in it. And that answer is going to be very neutral.
If you’re using GPT-4, it’ll probably talk about a rich tapestry a lot. It loves to talk about rich tapestries. If you ask it to code something artistic, it’ll do a fractal. It does very normal, central A.I. things.
So part of your job is to get the A.I. to go to parts of this possibility space where the information is more specific to you, more unique, more interesting, more likely to spark something in you yourself. And you do that by giving it context, so it doesn’t just give you an average answer. It gives you something that’s specialized for you.
The easiest way to provide context is a persona: [e.g.,] “You are an expert at interviewing, and you answer in a warm, friendly style. Help me come up with interview questions.”
It won’t be miraculous in the same way that we were talking about before. If you say you’re Bill Gates, it doesn’t become Bill Gates. But that changes the context of how it answers you. It changes the kinds of probabilities it’s pulling from and results in much more customized and better results.
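In practice, a persona is usually supplied as the system message of a chat-style request. A minimal sketch, using the common role/content message convention (the helper name is mine; the persona text is the example from above):

```python
# Attach a persona as a system message so the model answers "in
# character", pulling from a more specific region of its probability
# space rather than giving the neutral average answer.
def with_persona(persona: str, user_message: str) -> list[dict]:
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_message},
    ]

messages = with_persona(
    "You are an expert at interviewing, and you answer in a warm, "
    "friendly style.",
    "Help me come up with interview questions.",
)
```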
[Ezra Klein: OK, but this is weirder, I think, than you’re quite letting on here. … there’s a study that gives a bunch of different personality prompts to one of the systems, and then tries to get it to answer 50 math questions. And the way it got the best performance was to tell the A.I. it was a Starfleet commander who was charting a course through turbulence to the center of an anomaly… I mean, what the hell is that about?]
[W]e’re just scratching the surface, right? There’s a nice study actually showing that if you emotionally manipulate the A.I., you get better math results. So telling it your job depends on it gets you better results. Tipping, especially $20 or $100 — saying, I’m about to tip you if you do well, seems to work pretty well.
It performs slightly worse in December than May, and we think it’s because it has internalized the idea of winter break.
Prompting Tip 4: Don’t Worry About It So Much
[But] what I actually advise people to do is just not worry about it so much, because I think then it becomes magic spells that we’re incanting for the A.I. … [A]cting with it conversationally tends to be the best approach. And personas and contexts help, but as soon as you start evoking spells, I think we kind of cross over the line into, “who knows what’s happening here?”
… But the other factor that’s also super weird, while we’re on the way of super weird A.I. things, is that if you don’t do that, it’s going to still figure something out about you. It is a cold reader.… So part of why I like assigning a personality is to have an explicit personality you’re operating with, so it’s not trying to cold read and guess what personality you’re looking for.
… You keep wanting to not talk about the future. And I totally get that. But I think when we’re talking about learning something, where there is a lag, where we talk about policy — should prompt crafting be taught in schools? I think it matters to think six months ahead. And again, I don’t think a single person in the A.I. labs I’ve ever talked to thinks prompt crafting for most people is going to be a vital skill, because the A.I. will pick up on the intent of what you want much better.
Concerns: Superpersuasion
[I] don’t worry so much about prompt crafting in the long term … because I think that they [AI] will work on intent. And there’s a lot of evidence that they’re good at guessing intent.
There’s a reason why some of the worry you hear out of the labs is about superhuman levels of manipulation.… Like, I think we’re deeply trickable in this way. And A.I. is really good at figuring out what we want without us being explicit.
[Ezra Klein: … one thing we know from inside these A.I. shops is these A.I.s already are, but certainly will be, really super persuasive. And so if the later iterations of the A.I. companions are tuned on the margin to try to encourage you to be also out in the real world, that’s going to matter, versus whether they have a business model that all they want is for you to spend a maximum amount of time talking to your A.I. companion, whether you ever have a friend who is flesh and blood be damned.]
Regulation
[There’s] going to need to be some social decisions being made about how to use these things well as a society that are going to have to go beyond just the legal piece, or companies voluntarily complying.
[Ezra Klein: … if you want to make money off of American kids, we can regulate you.… if you want to be having credit card payments processed by a major processor, then you have to follow the rules…. if you’re making a lot of money, then you have relationships we can regulate.]