As a Machine Learning Engineer at Yabble, I’ve had the privilege of working with Large Language Models (LLMs) for the past two years, even before the explosion of interest sparked by ChatGPT. My journey began with earlier versions of GPT models, and these foundational experiences have been instrumental in refining my approach to creating effective prompts.
LLMs are incredibly powerful tools, capable of transforming the way we interact with technology and harness data. However, like any tool, they require skill and precision to achieve the best results. Through trial, error, and continuous learning, I’ve distilled a few key tips for crafting prompts that yield optimal outcomes.
LLMs, despite their sophistication, are not mind readers. The more clearly you define your objective, the more likely you are to get a useful response. Use specific language and avoid ambiguity. A well-crafted prompt minimizes misinterpretation and sets the LLM up for success.
Example:
Instead of asking, “Tell me about climate change,” try “Summarize the main causes and effects of climate change in a paragraph.”
One of the most common mistakes in LLM prompting is assuming that the model will know the best way to structure its response. Whether you need a list, a paragraph, or even a bulleted summary, it’s crucial to specify the format you’re looking for.
Example:
If you need a list, say, “Provide a bulleted list of the main causes of climate change.” If you need structured output, try “Generate a JSON array listing the primary drivers of climate change. Example format: ["cause1", "cause2", ...]” rather than a vague prompt that might return a paragraph instead.
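One practical payoff of requesting a machine-readable format is that you can check the reply programmatically instead of eyeballing it. A minimal sketch, assuming the model returns the requested JSON array (the helper function and sample reply here are illustrative, not part of any particular library):

```python
import json

def parse_causes(response_text: str) -> list[str]:
    """Parse the model's reply, expecting a JSON array of strings
    like ["cause1", "cause2", ...]; raise if the format is off."""
    causes = json.loads(response_text)
    if not isinstance(causes, list) or not all(isinstance(c, str) for c in causes):
        raise ValueError("Expected a JSON array of strings")
    return causes

# A reply that follows the requested format parses cleanly:
reply = '["greenhouse gas emissions", "deforestation", "industrial agriculture"]'
print(parse_causes(reply))
```

If the model drifts back into prose, `json.loads` fails loudly, which is exactly the signal you want for retrying with a firmer format instruction.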
LLMs thrive on context, but too much information can overwhelm them. If you need to provide a substantial amount of background, consider summarizing the key points first. This ensures that the LLM focuses on the most relevant details without getting bogged down.
Tip: Instead of feeding the model an entire article, summarize the article into a few sentences that capture the essence of the content you want to include as context.
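One lightweight way to keep context from ballooning is a hard budget on what you pass in. A rough sketch, with the caveat that the word-count cutoff is an arbitrary illustration; a real pipeline would summarize with another LLM call, or count tokens rather than words:

```python
def trim_context(text: str, max_words: int = 120) -> str:
    """Crude guard against oversized context: keep only the first
    max_words words. Truncation is a fallback; summarizing the text
    first preserves more of its meaning."""
    words = text.split()
    if len(words) <= max_words:
        return text
    return " ".join(words[:max_words]) + " ..."
```

A hard budget like this at least makes prompt size predictable, even when upstream documents are not.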
When you need very specific results, providing examples can be incredibly helpful. Examples help the LLM grasp the desired style, format, or tone, leading to more accurate and relevant outputs.
Example:
- Vague prompt: "Write a product description for a new smartphone."
- Improved Prompt: "Write a product description for a new smartphone, using a persuasive and enthusiastic tone, similar to this example: 'Experience the future in your hands with the revolutionary XYZ Phone. Its stunning display, lightning-fast processor, and groundbreaking camera system will redefine your mobile experience.'"
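In code, few-shot prompts like the one above are usually just assembled from a task statement plus one or more example strings. A minimal sketch (the helper name and layout are my own illustrative assumptions):

```python
def build_few_shot_prompt(task: str, examples: list[str]) -> str:
    """Assemble a prompt that states the task, then shows one or
    more style examples for the model to imitate."""
    lines = [task, "", "Use a persuasive and enthusiastic tone, similar to these examples:"]
    for i, example in enumerate(examples, start=1):
        lines.append(f"Example {i}: {example}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Write a product description for a new smartphone.",
    ["Experience the future in your hands with the revolutionary XYZ Phone."],
)
print(prompt)
```

Keeping examples in a list like this makes it easy to experiment with how many shots you need; often one or two well-chosen examples are enough.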
LLMs generally perform better when handling smaller, simpler tasks. For complex requests, it’s often more effective to break them down into individual components. Additionally, implementing Chain-of-Thought (CoT) prompting can guide the LLM through complex reasoning processes step-by-step.
Tip: Instead of asking for a marketing email, blog post, and social media captions in one go, start with, “Write a marketing email promoting our new product launch.” Then, follow up with separate prompts for the blog post and captions.
Example:
Complex Prompt: "Create a marketing campaign for a new eco-friendly cleaning product, including a slogan, social media posts, and a press release."
Improved Prompts:
- "Generate a catchy slogan for a new eco-friendly cleaning product."
- AI generates a slogan
- "Write three engaging social media posts promoting a new eco-friendly cleaning product."
- AI now generates your social media posts, using the slogan as context
- "Draft a press release announcing the launch of a new eco-friendly cleaning product."
- AI now generates your press release, using all prior information as context
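The chained flow above can be sketched as a loop that feeds each step's output back in as context for the next. Here `call_llm` is a hypothetical placeholder for your real model client (an OpenAI call, a local model, etc.), stubbed out so the chaining logic itself is runnable:

```python
# Prompt chaining sketch: each step's reply is kept in the message
# history so later steps can build on it.

def call_llm(messages: list[dict]) -> str:
    # Stub standing in for a real LLM call; echoes the latest request
    # so the flow is visible without an API key.
    return f"[model reply to: {messages[-1]['content']}]"

steps = [
    "Generate a catchy slogan for a new eco-friendly cleaning product.",
    "Write three engaging social media posts promoting the product, reusing the slogan.",
    "Draft a press release announcing the launch, consistent with the posts above.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = call_llm(messages)
    # Append the reply to the history so the next step sees it as context.
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # 3 user turns + 3 assistant turns
```

The key design choice is that context accumulates automatically: the press-release step sees both the slogan and the social posts without you restating them.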
If the LLM produces a response that’s unexpected or unclear, don’t hesitate to ask for an explanation. This provides valuable insight into the model’s reasoning, and asking for step-by-step working often yields more accurate results.
Example:
Vague Prompt: “Solve the question and give me the answer without any comment. Q. A train travels at 38 mph for 2 hours and 25mins. How far did it travel?”
- gpt-4o gives us the wrong answer, 92.17 miles
Improved Prompt: “Solve the question and provide the solution. I need a step-by-step explanation and the final answer. Q. A train travels at 38 mph for 2 hours and 25mins. How far did it travel?”
- gpt-4o gives us the correct answer, 91.83 miles, along with a clear explanation.
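The correct figure is easy to verify by hand: distance = speed × time, with 2 hours 25 minutes converted to hours first:

```python
# distance = speed * time, converting 2 h 25 min to hours
speed_mph = 38
time_hours = 2 + 25 / 60            # 2.4166... hours
distance = speed_mph * time_hours   # 91.8333... miles
print(round(distance, 2))           # 91.83
```

A quick check like this is also a good habit whenever an LLM hands back arithmetic: the model's explanation tells you *how* it got there, but a few lines of code tell you whether it's right.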
Effective LLM prompting is an iterative process. It’s rare to get the perfect output on the first try, so don’t be discouraged if your initial prompts don’t hit the mark. Experiment, refine, and learn from each interaction. With patience and practice, you’ll find yourself mastering the art of LLM prompting.
At Yabble, we’re passionate about ensuring that we utilise the full potential of LLMs and other AI technologies, especially within our innovative tools like Yabble’s synthetic data solution, Virtual Audiences, and our AI research assistant, Gen. Whether you’re new to LLMs or a seasoned pro, these prompting tips are designed to help you harness the full power of these AI-for-insights tools. Happy prompting!
This blog post is brought to you by Peter Hwang, Machine Learning Engineer at Yabble:
As a data scientist and machine learning engineer with over six years of experience, Peter has worked on a diverse array of projects, from pricing and inventory optimization to customer analysis and digital marketing. Currently, he is deeply engaged in the exciting field of Natural Language Processing (NLP), where he specializes in building applications based on Large Language Models (LLMs). Peter's work involves leveraging advanced techniques such as Retrieval-Augmented Generation (RAG) and the development of domain-specific LLMs to address complex challenges, ensuring an optimal balance between cost, speed, explainability, and value.