In today’s column, I showcase a prompt engineering technique that I refer to as conversational-amplified prompt engineering (CAPE). Some also use the shorter moniker of conversational prompt engineering (CPE), though that is a bit confusing since the phrase carries a multitude of other meanings. In any case, the underlying concept is that you can substantively improve your prompting by carrying on a conversation with generative AI and large language models (LLMs), such that the AI pattern-matches and trains on how you write your prompts. This provides numerous benefits.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
Importance Of Prompt Engineering
Readers might recall that I previously posted an in-depth depiction of over fifty prompt engineering techniques and methods, see the link here. There is no doubt that undertaking proficient prompt engineering is a learned skill. Top-notch prompt engineers know that leveraging various proven prompting techniques is the best way to get the most out of generative AI and likely garner you some good bucks for your skilled deeds.
The usual approach to prompting is a one-and-done method. You enter a prompt, wait to see what response you get, and then either move on or possibly craft another prompt to clarify or further expand on the prompt that you just entered. It is a rinse and repeat kind of cycle.
There is an interesting twist that few seem to realize.
The trick is this. Generative AI is highly capable of identifying patterns in how humans write. Indeed, the initial data training for LLMs is done by widely scanning the Internet for human-composed essays, narratives, stories, poems, and the like. Via computational and mathematical pattern-matching, AI figures out the underlying patterns associated with human compositions. That’s how AI is so seemingly fluent when conversing with the user.
We can use that same pattern-matching facility to catch onto how someone tends to write their prompts. It is readily feasible to essentially data-train the generative AI on your prompting style. Henceforth, the AI will be more likely to interpret your prompts as per what you have in mind and not wander afield of what you intend.
The twist then is that you can train generative AI on the nature of your prompting and garner handy benefits accordingly when performing ongoing prompt engineering. Easy-peasy.
Boom, drop the mic.
Conversational-Amplified Prompt Engineering
I refer to this specialized technique as conversational-amplified prompt engineering (CAPE). The naming is logical and sensible. You are going to engage in a focused conversation with generative AI and do so to have the AI pattern-match on your prompting style. The emphasis during that focused conversation will be how you write prompts. What you are prompting about isn’t especially crucial; what matters is the nature of how you compose your prompts.
In that sense, the prompt engineering that you subsequently undertake will be amplified because of that training. The AI will allow you to shortcut your prompts and still get the gist of your intentions. The AI will pretty much get the drift of your prompts due to having already examined and patterned them.
Here are some outstanding benefits due to the CAPE technique:
- (a) Generative AI will be able to undertake personalized prompt interpretations.
- (b) Reduces the overall prompt engineering effort required by the user.
- (c) Increases efficiency since you don’t have to be laborious in your prompts.
- (d) Enhances prompting, including the incorporation of popular prompt engineering techniques.
- (e) Promotes adaptation to domain-specific language or instructions.
- (f) Saves on cost because fewer clarifications of miscast prompts are needed.
That last point is one that I particularly point out during my classes on prompt engineering. One of the biggest aims of CAPE is that the AI will home in on what you want the AI to do. The beauty is this. You perhaps at times find yourself redoing a prompt several times, vigorously trying to get the AI to grasp the substance of what you are asking. Each of those tries costs you time and, if you are paying for your AI usage, a transaction fee charged by the AI system.
By training generative AI on your prompting style, there is a lessened chance of multiple trials and a heightened chance that the AI will garner what your prompt involves straightaway.
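To make that cost point concrete, here is a small back-of-the-envelope sketch in Python. The token counts and the per-token price are purely illustrative placeholders rather than actual rates from any AI provider; the gist is simply that cutting down on retries cuts your spend proportionally.

```python
# Hypothetical back-of-the-envelope estimate of how fewer prompt retries
# reduce usage fees. All numbers below are made-up placeholders.

tokens_per_attempt = 1_500          # assumed prompt + response tokens per try
price_per_million_tokens = 5.00     # assumed blended price in dollars (placeholder)

def monthly_cost(prompts_per_month: int, attempts_per_prompt: float) -> float:
    """Estimate monthly spend given how many tries each prompt typically takes."""
    total_tokens = prompts_per_month * attempts_per_prompt * tokens_per_attempt
    return total_tokens / 1_000_000 * price_per_million_tokens

before = monthly_cost(prompts_per_month=400, attempts_per_prompt=3)    # lots of re-dos
after = monthly_cost(prompts_per_month=400, attempts_per_prompt=1.2)   # AI gets it straightaway

print(f"Before CAPE: ${before:.2f}/month, after CAPE: ${after:.2f}/month")
```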
Example #1 Of Simple CAPE
I will provide a few quick examples to illustrate how conversational-amplified prompt engineering works.
I logged into generative AI to craft these examples. In this instance, I opted to use ChatGPT since it is widely popular, reportedly garnering about 300 million weekly active users. The same aspects shown here can be undertaken in any of the other major generative AI apps such as Claude, Gemini, Copilot, Llama, etc.
For this first example, I ask the generative AI to summarize an article for me. After the AI does so, I will provide feedback so that the AI then can pattern on my prompting style.
- My entered prompt: “Summarize this article.”
- Generative AI response: “Here’s a summary. Let me know if you want a different focus.”
- My entered prompt: “Looks good, but I prefer bullet points over paragraphs.”
- Generative AI response: “Got it! Here’s the summary in bullet points.”
- My entered prompt: “Thanks, I want you to remember that when I ask for summaries, I normally intend that bullet points are to be used rather than paragraphs.”
- Generative AI response: “I will remember that preference and abide by it accordingly.”
In this instance, the AI initially defaulted to giving the summary in a narrative or paragraph form. That’s fine but not what I usually want. I had the generative AI redo the summary into my preference of bullet points.
In addition, and this is crucial, I explicitly told the AI that henceforth when I ask for summaries, I want them shown in bullet form. The AI might not have otherwise detected that this is a discernible pattern of what I prefer. You will likely realize that this now becomes a handy-dandy shortcut for me. Every time that I ask for a summary, I won’t have to indicate in my prompt that I want bullet points, nor will I need to do a follow-up prompt to get bullet points. The AI will merely always proceed to give me summaries in bullet point form.
Happy face.
When I bring up this example in class, there is usually a smarmy person who objects and says that I am trapping myself into an infinite world of bullet points. The AI will seemingly never do anything for me other than bullet points. Though that might on the surface seem to be the case, in practice it isn’t so. All you need to do is state in a subsequent prompt that you don’t want bullet points on a particular occasion. Furthermore, if you decide that you no longer want an always-on bullet-points summarization, just tell the AI that the pattern is no longer to be used.
Problem solved.
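For readers who reach generative AI through an API rather than a chat window, the same remember-my-preference idea can be approximated by carrying the learned preference along as a standing instruction on every request. Here is a minimal sketch using the OpenAI Python SDK; the model name and the wording of the stored preference are illustrative choices of mine, not a required setup.

```python
# Minimal sketch: persisting a learned prompting-style preference across calls.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

# The "patterned" preference from the earlier conversation, kept as a standing
# instruction so every future request is interpreted the same way.
learned_preferences = (
    "When the user asks for a summary, default to bullet points rather than "
    "paragraphs, unless the user explicitly asks otherwise."
)

def ask(prompt: str) -> str:
    """Send a prompt with the stored prompting-style preference attached."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": learned_preferences},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarize this article: ..."))
```

Whether you keep that standing instruction in a system message, a custom-instructions setting, or a built-in memory feature depends on the particular AI you are using.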
Example #2 Of Simple CAPE
I went ahead and did some training interactions with generative AI to get it used to what I say in my prompts. The number of those interactions can be relatively modest, perhaps a dozen or so, as long as you explicitly direct the AI to be watchful during those interactions. You will need to get the AI to identify what patterns it has detected and then refine or correct them accordingly.
To give you an example of how this pays off, I asked the generative AI about my car, and the response was nicely crafted.
Take a look.
- My entered prompt: “What should I do about my car?”
- Generative AI response: “You have previously noted that you have two cars. One of them is a new car that you just recently bought. The other is a dozen years old and has been having frequent car troubles. When you mention what you should do about your car, should I generally assume that you are referring to the older car since it is the one that has troubles?”
- My entered prompt: “Yes, you can generally assume that if I am asking questions about my car that I am likely referring to my older car, thanks.”
- Generative AI response: “Will do.”
You can see that the generative AI had picked up on the fact that I have two cars, one of which is older and likely to be the one that has troubles. My prompt in this example was ambiguous about which of the two cars I was referring to. The AI was able to clarify and guess which car I had in mind, doing so aptly because of patterning on my earlier prompts.
Example #3 Of Simple CAPE
You might recall that in my first example, I mentioned that you can readily override a pattern that the AI has about your prompting style.
This example illustrates that facet.
- My entered prompt: “Summarize this article about AI and quantum theory.”
- Generative AI response: “I will summarize the article and do so in a series of bullet points. That style is what you usually seem to prefer in terms of my preparing summaries.”
- My entered prompt: “In this case, I want the summary to be very detailed and in-depth. Thus, this is an exception to my usual preference.”
- Generative AI response: “OK, I will prepare a detailed and in-depth summary this time. I will continue with the bullet points style thereafter.”
- My entered prompt: “Yes, that’s good, thanks, please proceed.”
Voila, it is as easy as falling off a log to recast the prompting style that the AI has patterned.
Advanced Aspects About CAPE
When you do the training regarding your prompting style, there are three main practices that I highly recommend:
- (1) Big picture. Converse to establish how you prompt overall.
- (2) Particular focus. Converse to establish specific prompting situations.
- (3) Feedback oriented. Provide feedback focused on your prompting preferences.
The first angle to undertake is to have the generative AI data-train on your overall prompting style. Just do a wide array of prompts as though you are playing around with AI. The wider the types of prompts and questions, the better. Use open-ended questions. Use closed-ended questions that elicit a precise answer. And so on.
The second angle is to converse with the AI on any specific considerations about your prompts. For example, a healthcare professional might tend to use generative AI for healthcare-specific questions and problem solving. In that case, it would be useful to carry out a series of prompts with the AI that are immersed in the healthcare domain. This allows the AI to pattern on those kinds of customized prompts.
The third angle is to provide feedback to the AI as it is attempting to pattern on your prompts. Do not assume that the AI is patterning in the right way. It might not be. Make sure to directly ask what patterns it is detecting. You can then give further guidance to make sure the patterns are apt.
Research On The CAPE Topic
Conversational-amplified prompt engineering is the naming convention that I use for this innovative advanced technique of prompt engineering. There is AI research about prompt engineering that fits this depiction, though the phrasing is often stated as conversational prompt engineering (CPE) rather than CAPE. The heartburn I have with CPE as a moniker is that it means different things to different AI insiders. To help clarify, I use CAPE as a naming convention.
One popular avenue of research on this approach entails using a dedicated front-end tool that aids in patterning on a prompting style. If you are interested in that kind of research, a recent paper entitled “Conversational Prompt Engineering” by Liat Ein-Dor, Orith Toledo-Ronen, Artem Spector, Shai Gretz, Lena Dankin, Alon Halfon, Yoav Katz, and Noam Slonim, arXiv, August 8, 2024, makes these salient points (excerpts):
- “Prompts are how humans communicate with LLMs. Informative prompts are essential for guiding LLMs to produce the desired output.”
- “However, prompt engineering is often tedious and time-consuming, requiring significant expertise, limiting its widespread use.”
- “We propose Conversational Prompt Engineering (CPE), a user-friendly tool that helps users create personalized prompts for their specific tasks. CPE uses a chat model to briefly interact with users, helping them articulate their output preferences and integrating these into the prompt.”
- “The process includes two main stages: first, the model uses user-provided unlabeled data to generate data-driven questions and utilize user responses to shape the initial instruction. Then, the model shares the outputs generated by the instruction and uses user feedback to further refine the instruction and the outputs.”
Whether you use a specialized tool or merely rely on generative AI pattern-matching generically, the underlying concepts and practices are roughly the same.
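For those curious how such a tool-driven loop might be wired up, here is a rough Python sketch of the two-stage flow the paper describes. This is my own simplified rendering, not the authors’ code, and the call_chat_model() helper is a hypothetical placeholder for whatever LLM API you happen to use.

```python
# Rough sketch of a two-stage conversational prompt-refinement loop, loosely
# following the process described in the CPE paper. The call_chat_model()
# helper is a hypothetical placeholder for your own LLM API call.

def call_chat_model(instruction: str) -> str:
    raise NotImplementedError("Plug in your own LLM API call here.")

def refine_prompt(task_description: str, sample_inputs: list[str]) -> str:
    # Stage 1: ask data-driven questions about the user's unlabeled examples
    # and fold the answers into an initial instruction.
    questions = call_chat_model(
        f"Task: {task_description}\nSamples: {sample_inputs}\n"
        "What clarifying questions should the user answer about output preferences?"
    )
    answers = input(f"{questions}\nYour answers: ")
    instruction = call_chat_model(
        f"Draft a prompt for the task '{task_description}' that reflects these "
        f"preferences: {answers}"
    )

    # Stage 2: show outputs produced by the draft instruction and refine it
    # from the user's feedback until the user is satisfied.
    while True:
        outputs = [call_chat_model(f"{instruction}\n\nInput: {s}") for s in sample_inputs]
        feedback = input(f"Outputs: {outputs}\nFeedback (or 'done'): ")
        if feedback.strip().lower() == "done":
            return instruction
        instruction = call_chat_model(
            f"Revise this prompt based on the feedback '{feedback}':\n{instruction}"
        )
```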
When CAPE Comes To The Rescue
Let’s cover the when and where of conversational-amplified prompt engineering.
Users who infrequently use generative AI are not likely to be candidates for the conversational-amplified prompt engineering technique since they probably aren’t pushing the boundaries of their prompting. Thus, this technique is not for everyone, nor is it a silver bullet.
On the other hand, anyone frequently using generative AI, and especially those stretching the boundaries of their prompts, would greatly benefit from this approach. It doesn’t take much time and effort to do the data-training. Plus, once you’ve done the upfront heavy lifting, things are good to go.
A final thought for now.
They say that practice makes perfect. It is a commonly noted piece of sage advice. I suppose the meaning is clearcut and reasonable.
Vince Lombardi famously said this in contrast: “Practice does not make perfect. Only perfect practice makes perfect.” Shocker. Contrarian. An intriguing alternative perspective.
Bottom-line is this. Go ahead and proceed with conversational-amplified prompt engineering if that’s going to be to your advantage when using generative AI. Remember that it’s not just practice alone that will work the magic; it is also, shall we say, perfect practice that really counts.
Sage advice, indeed.