The launch of ChatGPT redefined what's possible for knowledge workers. Since the advent of transformative Large Language Models (LLMs) like GPT-4, we have seen extraordinary potential to enhance productivity, creativity, and decision-making. However, many remain unaware of how to tap into these capabilities, leaving a vast portion of AI's potential unexploited. The key lies in the art of "prompt engineering": a skill that is becoming ever more important for all knowledge workers. As a long-time AI enthusiast, my journey took a sudden turn with the launch of ChatGPT, which has since taken me across many aspects of generative AI. We were early to integrate GPT into our product at Delibr, letting Product Managers co-create their PRDs with an AI Copilot, and more. Since then, I have been pulled into holding dozens of talks, coaching sessions, and strategy workshops on generative AI, and I have run several courses on ChatGPT and GPT-4. These experiences have only reaffirmed my belief in the enormous benefits of learning to use AI models like ChatGPT proficiently. In this blog post, I'll introduce a framework for prompt engineering that I developed across many coaching sessions, and that I have found truly helps professionals level up their productivity.
What is Prompt Engineering?
In the rapidly evolving landscape of generative AI, one skill has begun to set the pioneers apart: prompt engineering. Prompt engineering is the craft of designing precise instructions that guide LLMs like GPT-4 to generate the desired responses. It's a language of sorts, a lingua franca between humans and AI that enables us to tap into the incredible processing power and pools of training data of these models and turn them into valuable insights. At its core, prompt engineering revolves around the ability to ask the right questions in the right way. It is becoming an ever more important skill for anyone aiming to thrive in the era of generative AI. As the now famous meme goes: "An AI won't replace you, a person using an AI will." By understanding prompt engineering, we can become more effective communicators with AI, making the most of the capabilities it offers. This ensures that we are not the ones being replaced but are instead the ones propelling our careers and our organizations forward with the help of AI. The rise of AI-enhanced productivity is no longer a distant reality. It's here, and prompt engineering is the key to unlocking its full potential.
Framework for Mastering Prompt Engineering
The path to mastering prompt engineering involves understanding and building on several layers of knowledge. As I developed this framework across many coaching sessions, I took inspiration from one of my favourite blog articles, "How to Become an Expert: A Roadmap" (https://litemind.com/expert-roadmap/).
The framework breaks down the journey into three distinct levels: Applying Existing Prompts, Crafting Own Prompts, and Mastering Prompt Engineering.
Level 1: Applying Existing Prompts
At its simplest level, using ChatGPT begins with looking at prompt examples and templates and trying them out yourself. It is easy to find examples on the internet: just Google "chatgpt prompt examples for X" or "best chatgpt prompt templates for Y" and you will get plenty of hits. Familiarity with examples and templates gives you a good starting point and helps your brain build pattern recognition. Whenever you are learning something, it is good if you can find concrete examples to learn from. However, the examples you find will likely not fit exactly what you are looking for. To develop further, you will need to go beyond just copy-pasting what others have done.
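A template you might find this way could look something like the following (an illustrative example, with the bracketed parts for you to fill in):

```text
"I want you to act as a [profession]. I will describe [a situation],
and you will [desired output]. Keep your answer [length/tone].
My first request is: [your request]."
```

Even a generic template like this one already hints at the building blocks that the next levels make explicit: a role, an instruction, and a desired output.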
Level 2: Crafting Own Prompts
As we delve deeper, the second level involves recognizing various prompt types and customizing your own prompts.
Learning the different types of prompts opens your mind to the possibilities, helps you find more use cases, and lets you pick the right approach for each one. The three main categories of prompts are content-oriented, focusing on generating some specific content; interaction-oriented, designed to stimulate a conversational or interactive response; and situation-oriented, reflecting a certain decision, scenario, or context. Familiarity with these diverse types, their subtypes, and their appropriate use cases gives you a better starting point for your AI interactions.
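To make the three categories concrete, here is one illustrative example of each (my own hypothetical prompts, not an exhaustive taxonomy):

```text
Content-oriented:     "Write a 200-word product announcement for our new
                       reporting dashboard."
Interaction-oriented: "Act as a skeptical customer and let me practice
                       handling your objections, one message at a time."
Situation-oriented:   "We must cut our roadmap by 30% next quarter. Given
                       the three initiatives below, how would you reason
                       about what to cut?"
```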
Learning to craft your own custom prompts gives you the power to ask exactly what you want from the AI, ensuring a fit-for-purpose response. By formulating your instruction, background information, and desired goal, you can derive more relevant responses. Thoughtfully crafted custom prompts also weave in other elements that subtly steer the AI's output, such as adopting a specific role or writing style, and defining the output format.
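Putting those elements together, a custom prompt skeleton might look like this (an illustrative template; the labels are just one way to organize your thinking, and in practice you would write it as flowing text):

```text
Role:        You are an experienced B2B product marketer.
Instruction: Write a launch email for the feature described below.
Background:  Our product is a project tool for remote teams; the new
             feature is automatic meeting summaries.
Goal:        Get existing customers to try the feature this week.
Style:       Friendly, concise, no jargon.
Format:      Subject line, then three short paragraphs, then one CTA.
```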
At this level of prompt engineering, the benefits start to become self-evident, as there are so many use cases where you can really get a lot of help. However, you still have a long way to go.
Level 3: Mastering Prompt Engineering
The third level delves into more sophisticated prompt engineering skills that, taken together, bring the practice of prompt engineering to a whole new level. It involves metacognition, expository writing, adopting an iterative approach, and using advanced prompting techniques.
Metacognition, or "thinking about thinking," is perhaps the most important concept for mastering prompt engineering. For many, it is the most appreciated part of my courses. Yet, it is also the most elusive, the one most people struggle with taking from theory to practice without in-person coaching. It involves two aspects: stepping out of your own thought processes, and conceptualizing the thought process of the AI. When thinking about your own thinking, it helps to ask questions like: "What am I trying to do? What have I concluded, and what is left?" When thinking about the thinking of the AI, it helps to ask the AI questions like: "What would a good prompt be for X? What do you need to know to help me with Y?" Your metacognition skill is like a muscle: the more you practice it, the better it gets. Once people get the hang of metacognition for prompt engineering, they typically both find many more relevant use cases and get much better results.
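In practice, this second aspect often takes the form of "meta-prompts" that put the AI's reasoning to work on the prompt itself (illustrative examples of my own):

```text
"What would a good prompt be for getting useful feedback on a
 product spec? Write the prompt for me."

"Before answering: what do you need to know from me to give a
 useful answer about our pricing strategy? Ask me your questions
 first, then wait for my replies."
```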
In the realm of prompt engineering, the adage "garbage in, garbage out" holds remarkably true. Clear, concise, and comprehensive prompts lead to superior results. This is the essence of the expository writing skill. Expository writing involves presenting facts and evidence for reader understanding, aiming to inform, explain, or describe a topic. It eschews persuasion and entertainment, focusing solely on clarity and precision. By adopting expository writing principles in prompt engineering, we can give AI models the detailed, unambiguous directions they need to deliver highly relevant and useful outputs.
Like any creative or problem-solving process, working with AI models also benefits from an iterative approach. An initial prompt may not always yield the desired outcome, but it can provide a solid starting point. By analyzing the AI's response and iteratively refining the prompt, working from a higher level and then into the details, we can home in on the optimal result. Having spent six years as a management consultant at McKinsey, I am struck by how well the AI responds to the structured and iterative problem-solving approach that was used there, and how replicating that approach when interacting with the AI yields better results faster. Another especially useful iterative approach is "prompt chaining," where each prompt builds upon the previous one, incorporating its context and response. This technique results in a richer, more nuanced interaction with the AI model, making it capable of addressing complex problems by tackling one aspect at a time, building up to the final result.
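For those who like to see things in code, prompt chaining can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: `call_model` is a stand-in for a real LLM API call (e.g. a chat endpoint) and is stubbed here so the sketch runs on its own.

```python
# A minimal sketch of "prompt chaining": each new prompt is sent together
# with the accumulated conversation, so the model sees all earlier context.

def call_model(messages):
    # Placeholder: a real implementation would send `messages` to an LLM API.
    # Here we just echo the latest prompt so the sketch is self-contained.
    return f"response to: {messages[-1]['content']}"

def run_chain(prompts):
    """Send each prompt in turn, carrying the full history forward."""
    messages = []
    responses = []
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        responses.append(reply)
    return responses

# Each step builds on the previous response, narrowing toward the end result.
steps = [
    "Summarize the customer feedback below: ...",
    "List the top three problems in that summary.",
    "Draft one PRD section addressing the first problem.",
]
results = run_chain(steps)
```

The key design point is that the conversation history travels along with every call, which is what lets step three refer back to "that summary" from step one.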
The final part of the framework revolves around mastering various prompting techniques to further improve the AI's output quality. Even though GPT-4 is really smart, it struggles with some tasks. For those tasks, special techniques can be used to make the model smarter. Techniques such as 'few shot prompting,' which leverages multiple example prompts to steer the AI's response, or 'chain of thought reasoning,' which directs the AI to lay out a logical progression of ideas, can dramatically influence the quality of the output. Similarly, 'reflection prompts' invite the AI to self-evaluate its understanding or solution, providing an additional layer of refinement to the output. Knowing techniques like these means that whenever you encounter a task that is tough for your AI to handle, you can give it a helping hand, effectively making it smarter.
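Here is what each of these three techniques might look like in a prompt (illustrative examples of my own):

```text
Few-shot prompting (show the pattern, then ask):
  Review: "Great battery life"         -> Sentiment: positive
  Review: "Screen cracked in a week"   -> Sentiment: negative
  Review: "Decent for the price"       -> Sentiment:

Chain of thought reasoning:
  "A train leaves at 9:05 and arrives at 11:40. How long is the
   trip? Think step by step before giving the final answer."

Reflection prompt (after a first answer):
  "Review your answer above. Are there any errors or missing
   cases? If so, provide a corrected version."
```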
Embarking on the path of learning prompt engineering, from deciphering various prompt types to tailoring personalized prompts and mastering advanced techniques, we can become much more effective at engaging with AI models like ChatGPT. This framework isn't a rigid rulebook but a roadmap for growth, to be complemented with feedback and coaching.
The real beauty of this journey lies not merely in obtaining a new set of skills, but in the transformative learning that it triggers. In the course of learning to instruct an AI model, you get an invaluable opportunity to delve into your own thinking and communication patterns. Yet, this kind of self-reflection often reaches its full potential with the aid of insightful coaching that helps spotlight blind spots and push beyond limits, taking you from enthusiastic learner to bona fide master in the craft of prompt engineering.
Course Impact and Feedback
Over the past few months, I've had the privilege of teaching prompt engineering to amazing groups of enthusiastic professionals in my courses and coaching sessions. The participants' engagement and passion for learning have truly been a driving force in improving the course content and delivery with each session. Participants consistently reported significant impact from the course. From enhancing their productivity and efficiency to opening new avenues of problem-solving, mastering prompt engineering has brought tangible benefits to their professional lives. One participant reflected: "I liked the theory, but getting to play around with these concepts and get coaching on how to do it, that is what really gave me the major aha-experiences!" I was humbled and happy to receive incredibly positive feedback, with a Net Promoter Score (NPS) of over 70 for the inaugural course, climbing to over 80 for the subsequent one. This enthusiastic response speaks not only to the value of the course but also to the rising interest in and need for prompt engineering skills in today's AI-augmented landscape.
The journey towards mastering prompt engineering continues, and I'm excited to bring more opportunities for learning with upcoming courses.
For those working in product management and looking to sharpen their AI skills, I am co-hosting a course with Reza Farhang at Crisp. You can register for this course, titled "ChatGPT & GPT-4 for Product People", at this link. I am also thrilled about my upcoming collaboration with Pragmatic Institute on courses for Product Managers after the summer.
For those in HR roles, Frida Mangen and I have crafted a unique course to help HR professionals harness the power of AI in their work. More information on this can be found at this link.
The goal is to cater to a wide range of roles and fields, recognizing the universal applicability and impact of AI skills. I have already worked on Generative AI for a wide range of fields: SaaS, Healthcare, Sales, Game Design, Journalism, Fashion, just to mention a few. I find that for every new field, I learn things applicable to the previous ones, transfer learning really is a thing. If you are interested in a custom course tailored to your specific needs, please do not hesitate to get in touch.