A B2B Marketer's Guide to Prompt Engineering

Prompt engineering—the art of designing effective prompts to elicit desired outputs from large language models (LLMs)—has emerged as a critical skill in the era of conversational AI. As leading platforms like OpenAI, Anthropic, Google, and Meta continue to advance their LLMs, a set of common best practices has crystallised, offering valuable insights for users seeking to optimise their interactions with these powerful tools.

As a B2B content marketer, you can use LLMs to enhance your content creation process and improve the quality of your outputs. By applying prompt engineering best practices across leading platforms like OpenAI, Anthropic, Google, and Meta, you can generate compelling, targeted content that resonates with your audience and drives business results. Let's explore how these best practices can be applied to various aspects of B2B content marketing.

Common Best Practices Across Platforms

Clarity and Specificity

Across all platforms, the importance of crafting clear, specific prompts cannot be overstated. LLMs are not mind readers; they rely on the information provided in the prompt to generate useful outputs. By including relevant details, context, and examples in your prompts, you can guide the model towards producing more accurate responses. Vague or ambiguous prompts often lead to generic or irrelevant outputs.

Example: When drafting SEO-optimised content, provide the LLM with specific keywords, desired word count, target audience, and key points to cover.

Prompt:
"Write a detailed outline for a blog post targeting IT decision-makers about the benefits of cloud computing for businesses.

Include the keywords 'cloud migration,' 'scalability,' and 'cost efficiency.'

Cover the following points: 1) Increased flexibility and agility, 2) Reduced infrastructure costs, 3) Enhanced security and data protection."

The end result will almost certainly be somewhat generic, but with the additional help of professional content copywriters it can be made more relevant, distinctive and specific to your company’s offer.

Structured Prompts

Structuring prompts using techniques like delimiters (e.g., triple backticks, XML tags) and clear sectioning can help LLMs better understand the different components of your input. This is particularly useful when providing examples, instructions, or context alongside your main query. By clearly demarcating these elements, you enable the model to more effectively parse and use the information you've provided.

Example: Use XML tags to clearly distinguish between the article outline and the target audience when generating content ideas.

Prompt:
"<outline>1. Introduction to AI in marketing 2. Benefits of AI for B2B marketers 3. Real-world examples of AI-powered marketing 4. Implementing AI in your marketing strategy 5. Future trends in AI marketing</outline>

<audience>B2B marketing managers and executives</audience>

Generate 3 content ideas based on the provided outline and target audience."
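
If you assemble prompts in code, a small helper keeps the delimiters consistent from section to section. A minimal Python sketch; the tag names and content here are purely illustrative:

```python
def tag(name: str, content: str) -> str:
    """Wrap a prompt component in an XML-style tag so the model can parse it."""
    return f"<{name}>{content}</{name}>"

outline = ("1. Introduction to AI in marketing 2. Benefits of AI for B2B marketers "
           "3. Real-world examples of AI-powered marketing")
audience = "B2B marketing managers and executives"

prompt = "\n".join([
    tag("outline", outline),
    tag("audience", audience),
    "Generate 3 content ideas based on the provided outline and target audience.",
])
print(prompt)
```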

Role-Playing and Personalisation

Assigning a specific role or persona to the model can yield more tailored, context-aware responses. By asking the model to adopt the perspective of a domain expert, a character, or even a specific individual, you can elicit responses that are more aligned with your desired tone, style, and knowledge base. This technique is especially powerful when combined with clear instructions and relevant examples.

Example: Have the LLM adopt the persona of a successful B2B content marketer when brainstorming article ideas.

Prompt:
"As an experienced B2B content marketer, brainstorm 5 engaging article ideas that would resonate with sales professionals in the software industry. Focus on topics that address common challenges and provide actionable insights."

Few-Shot Learning

Providing a few examples of the desired input-output pattern can significantly improve the model's ability to generate relevant, properly formatted responses. Known as few-shot learning, this technique leverages the model's ability to recognise and replicate patterns from a small number of examples. When using few-shot learning, be sure to provide diverse, representative examples and maintain consistent formatting across all examples.

Example: Provide examples of well-written PPC ad copy to guide the LLM in generating ideas for new ad copy.

Prompt:
"Here are 3 examples of effective PPC ads for a CRM software:

  1. Streamline your sales process | Boost productivity with CRM | Start your free trial today

  2. Close more deals with CRM | Manage your pipeline efficiently | Get started now

  3. Maximise your sales potential | Powerful CRM features | Request a demo

Generate 3 more PPC ads following a similar format and tone, targeting B2B decision-makers in the financial industry."
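
If you're working through an API rather than a chat interface, the same few-shot prompt can be sent with the OpenAI Python SDK. A minimal sketch, assuming SDK v1.x and an illustrative model name:

```python
# Few-shot prompting via the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

examples = [
    "Streamline your sales process | Boost productivity with CRM | Start your free trial today",
    "Close more deals with CRM | Manage your pipeline efficiently | Get started now",
    "Maximise your sales potential | Powerful CRM features | Request a demo",
]

prompt = (
    "Here are 3 examples of effective PPC ads for CRM software:\n"
    + "\n".join(f"{i}. {ad}" for i, ad in enumerate(examples, 1))
    + "\n\nGenerate 3 more PPC ads following a similar format and tone, "
      "targeting B2B decision-makers in the financial industry."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```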

Iterative Refinement

Prompt engineering is an iterative process. Rarely will you arrive at the perfect prompt on your first attempt. By analysing the model's outputs and identifying areas for improvement, you can progressively refine your prompts to achieve better results. This may involve adjusting your instructions, providing more relevant examples, or experimenting with different prompt structures and phrasings.

Example: Continuously review and refine the brand proposition statements generated by the LLM to ensure alignment with company values and goals.

Prompt:
"Here is our current brand proposition statement: [insert statement].

Please provide 3 alternative versions that better emphasise our commitment to innovation and customer success. For each version, explain how it aligns with our company values and goals."

Leveraging External Knowledge

While LLMs are trained on vast amounts of data, they may not always have access to the most up-to-date or domain-specific information. By incorporating relevant external knowledge into your prompts (e.g., through embeddings-based retrieval or direct injection), you can enhance the model's ability to generate accurate, informative responses. This is particularly valuable when dealing with niche topics or rapidly evolving domains.

Example: Incorporate industry reports and case studies when generating content ideas to ensure relevance and credibility.

Prompt:
"Using the insights from the attached industry report on B2B marketing trends and the case study on successful ABM campaigns, generate 5 content ideas that leverage this external knowledge. Ensure each idea is relevant to our target audience of B2B marketers and includes supporting data from the provided sources."

Ethical Considerations

As with any powerful technology, prompt engineering comes with a responsibility to use LLMs ethically and responsibly. This means being mindful of potential biases, avoiding the generation of harmful or misleading content, and ensuring that the outputs align with your intended use case and values. By carefully crafting your prompts and providing appropriate guidance and constraints, you can help steer the model towards generating more ethical, trustworthy responses.

Example: Provide guidelines to the LLM to ensure generated content is factually accurate, unbiased, and aligns with your brand's values.

Prompt:
"When generating content ideas and article outlines, ensure that all information is factually accurate and supported by credible sources. Avoid expressing opinions on sensitive topics and maintain an unbiased tone. All content should align with our brand's values of transparency, integrity, and respect for our audience."

Model-Specific Optimisation

While the fundamental principles of prompt engineering are largely consistent across platforms, each LLM has its own unique characteristics and capabilities. To get the most out of a specific model, it's essential to familiarise yourself with its strengths, weaknesses, and idiosyncrasies. This may involve experimenting with different prompt formats. If you are working with these platforms' APIs or playgrounds, you can also adjust model parameters (e.g., temperature, top-k, top-p) or leverage platform-specific features and extensions.
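
For example, via an API you might raise the temperature when brainstorming and lower it for factual tasks. Below is a minimal sketch using the OpenAI Python SDK (v1.x); the model name is an assumption, and note that top-k is exposed by some platforms (such as Anthropic and Google) but not by OpenAI's API:

```python
# A minimal sketch of adjusting sampling parameters (OpenAI SDK v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",   # assumption: substitute your preferred model
    temperature=0.9,  # higher = more varied, creative wording
    top_p=0.95,       # nucleus-sampling cutoff
    messages=[{"role": "user",
               "content": "Brainstorm 5 B2B blog post angles on cloud migration."}],
)
print(response.choices[0].message.content)
```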

1 - Claude from Anthropic

XML Tags

Claude is particularly familiar with prompts structured using XML tags, as it was exposed to this format during training. By enclosing key components of your prompt (e.g., instructions, examples, input data) in XML tags, you can help Claude better understand the context and generate more accurate outputs.

Example: Use XML tags to separate the target audience, desired tone, and key points when generating a blog post outline.

Prompt:
"<audience>B2B sales managers</audience>
<tone>Informative and engaging</tone>
<keypoints>1. Importance of lead nurturing 2. Strategies for effective lead nurturing 3. Measuring the success of lead nurturing campaigns</keypoints>
Generate a blog post outline based on the provided target audience, desired tone, and key points."
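
Sent through the Anthropic Python SDK, that prompt might look like the sketch below; the model name is an assumption, so substitute whichever Claude model you have access to:

```python
# XML-tagged prompt sent via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = (
    "<audience>B2B sales managers</audience>\n"
    "<tone>Informative and engaging</tone>\n"
    "<keypoints>1. Importance of lead nurturing "
    "2. Strategies for effective lead nurturing "
    "3. Measuring the success of lead nurturing campaigns</keypoints>\n"
    "Generate a blog post outline based on the provided target audience, "
    "desired tone, and key points."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use a current Claude model
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```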

Long Context Windows

Claude boasts an extended context window (100,000 tokens for Claude v1.3), enabling it to handle complex tasks that require processing large amounts of information. To make the most of this capability, consider combining multiple relevant pieces of information into a single prompt, using XML tags to maintain a clear structure.

Example: Provide Claude with a detailed company background and target audience information to generate a comprehensive brand narrative.

Prompt:
"[Insert detailed company background and target audience information] Based on the provided context, create a 500-word brand narrative that highlights our unique value proposition, mission, and commitment to our target audience. Ensure the narrative is consistent with our company's history and values."

These approaches can be really helpful in getting the AI to produce fresh perspectives or identify ideas that may not occur to you, but the end result is usually fairly generic. We blend the results of computational methods and AI with talented human copywriters to create more effective content.

Instruction Following

Claude is adept at following multi-step instructions. By breaking down complex tasks into a series of clear, distinct steps, you can guide Claude towards producing more accurate, well-structured outputs. Use numbered lists or bullet points to present instructions in a logical, easy-to-follow manner.

Example: Break down the process of creating a content calendar into clear, step-by-step instructions for Claude to follow.

Prompt:
"To create a 3-month content calendar for our B2B blog, follow these steps:

  1. Identify 12 topic ideas (4 per month) based on our target audience's interests and pain points.

  2. For each topic, generate a blog post title and a brief outline with 3-5 key points.

  3. Assign a target publication date and a target word count (between 1,000 and 1,500 words) for each post.

  4. Prioritise the topics based on relevance and timeliness.

  5. Present the final content calendar in a table format with columns for Topic, Title, Outline, Target Date, and Word Count."

This method can be a great way to ideate and get over the sense of being daunted by a blank page. For a more strategic approach, you can ask us to go deeper into your brand messaging and the data to develop a full content marketing strategy and plan.

Constitutional AI

Anthropic has trained Claude using constitutional AI principles, which aim to ensure the model behaves in accordance with certain rules and values. When crafting prompts for Claude, consider how you can align your instructions and examples with these principles to generate more ethical, trustworthy responses.

Example: Align the generated content with your company's values and ethical guidelines.

Prompt:
"When generating content ideas for our B2B audience, ensure that all proposals adhere to our company's values of integrity, transparency, and customer-centricity. Avoid promoting practices that could be seen as deceptive or manipulative, and prioritise the needs and interests of our target audience above all else."

2 - ChatGPT from OpenAI

OpenAI's GPT models, particularly GPT-3.5 and GPT-4, have set the standard for prompt engineering best practices:

Role-Playing

OpenAI's models excel at adopting different personas and writing styles based on the prompt. By providing a clear, detailed description of the desired role or persona, you can elicit responses that are tailored to your specific use case. Use the system message to set the overall tone and context for the conversation.

Example: Have ChatGPT take on the role of a seasoned B2B copywriter when generating ad copy.

Prompt:
"As an experienced B2B copywriter, create 3 compelling ad headlines and brief descriptions for our new marketing automation software. Focus on the benefits of using our software, such as increased efficiency, better lead management, and higher conversion rates. The ads should target marketing managers and executives in mid-sized to large enterprises."

However, it's important to recognise that LLMs are not a replacement for the expertise and creativity of human professionals. Even when provided with a clear, detailed description of the desired role, the model's outputs will lack the depth of knowledge, real-world experience, and creative problem-solving abilities that a skilled human brings to the task.

The resulting ad copy will provide a better starting point than without a role assigned, but will likely require refinement and adaptation by a human copywriter to truly resonate with the target audience.
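
In API terms, the persona belongs in the system message and the task in the user message. A minimal sketch with the OpenAI Python SDK (the model name is assumed):

```python
# Role-playing via the system message (OpenAI SDK v1.x).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system",
         "content": "You are an experienced B2B copywriter specialising in "
                    "marketing automation software for enterprise buyers."},
        {"role": "user",
         "content": "Create 3 compelling ad headlines and brief descriptions "
                    "for our new marketing automation software."},
    ],
)
print(response.choices[0].message.content)
```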

Few-Shot Learning

GPT models are highly capable of learning from a small number of examples. When providing examples in your prompts, aim for diversity and representativeness. Use a consistent format across all examples to help the model identify and replicate the desired pattern.

Example: Provide ChatGPT with examples of effective B2B email subject lines to guide the generation of new ones.

Prompt:
"Here are 3 examples of high-performing B2B email subject lines:

  1. [Webinar] Boost your sales productivity with AI-powered CRM

  2. 5 proven strategies to maximise your marketing ROI

  3. Exclusive invite: Join our roundtable discussion on the future of B2B marketing

Generate 5 more email subject lines following a similar format and style, focusing on topics relevant to B2B marketers in the technology industry."

Chain-of-Thought Prompting

For complex reasoning tasks, consider using chain-of-thought prompting, which encourages the model to break down its reasoning process into a series of intermediate steps. This can lead to more accurate, transparent outputs, as it allows you to follow the model's "thought process" and identify potential errors or biases.

Example: Ask ChatGPT to explain its reasoning when evaluating the effectiveness of different content distribution channels.

Prompt:
"Evaluate the effectiveness of the following content distribution channels for B2B marketers: 1) LinkedIn, 2) Twitter, 3) Industry-specific forums, 4) Email newsletters.

For each channel, provide a step-by-step explanation of why it is or isn't effective, considering factors such as audience targeting, engagement potential, and measurable ROI."

Function Calling

OpenAI's Chat Completions API supports function calling, which lets the model generate structured arguments for functions you define in a schema. Functions connect your own company's systems or third-party APIs to the model so it can pull in extra information as needed. This feature is particularly useful for tasks that require structured inputs or interactions with external APIs.

Example: Use function calling to generate and analyse SEO-related data for content optimisation.

Prompt:
"Analyse the provided blog post text using the 'get_seo_score' and 'get_keyword_density' functions. Based on the results, provide 3 actionable suggestions for improving the post's SEO, such as incorporating missing keywords, optimising header tags, or improving readability."

3 - LLaMA from Meta

Meta's LLaMA model and its derivatives (e.g., Alpaca) have sparked a new wave of interest in open-source LLMs and prompt engineering techniques:

Prompt Tuning

Meta's research has highlighted the effectiveness of prompt tuning, a technique that involves fine-tuning a small set of soft prompts while keeping the underlying model parameters fixed. By carefully designing and optimising these soft prompts, you can adapt LLMs to specific tasks without the need for full fine-tuning.

By fine-tuning the prompt with relevant keywords, instructions, and tone guidelines, you can adapt the model to generate content that resonates with your target audience and aligns with your content marketing goals.

The process of prompt tuning typically involves the following steps:

  1. Identify the specific task or domain you want to adapt the model for (e.g., generating ideas for industry-specific thought leadership content).

  2. Design a set of soft prompts that capture the essential elements of the task, such as relevant keywords, instructions, and desired output format.

  3. Fine-tune the soft prompts using a small dataset or a few examples that demonstrate the desired output. This process optimises the prompts to guide the model towards generating content that aligns with your requirements.

  4. Evaluate the model's performance using the fine-tuned prompts and iterate on the prompt design if necessary.

By leveraging prompt tuning, you can efficiently adapt LLMs to various content marketing tasks without the need for extensive fine-tuning of the entire model. This approach allows you to maintain the model's broad knowledge while tailoring its output to your specific use case.
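
Note that true prompt tuning optimises learned "soft" prompt embeddings rather than visible text, so it requires access to the model weights. A rough sketch using the Hugging Face PEFT library; the base checkpoint and hyperparameters are assumptions chosen for illustration:

```python
# Soft prompt tuning with Hugging Face PEFT: only the virtual prompt
# embeddings are trainable; the base model stays frozen.
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM you can access
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,                      # length of the soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # initialise from task text
    prompt_tuning_init_text="Write B2B thought leadership content:",
    tokenizer_name_or_path=base,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # confirms only the soft prompt is trainable
# ...train on a few example outputs, then save just the tuned prompt:
# model.save_pretrained("soft-prompt-b2b")
```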

Example: Fine-tune prompts for outlining industry-specific thought leadership content.

Prompt:
"As an expert in [industry], discuss the top 3 trends shaping the future of [specific topic]. The content should provide in-depth insights and actionable advice for B2B professionals looking to stay ahead of the curve. Use a thought-provoking and informative tone, and include real-world examples to support your arguments."

In-Context Learning

LLaMA and its derivatives excel at in-context learning, the ability to learn from examples provided within the prompt itself. When crafting prompts for these models, prioritise the quality and relevance of your examples over quantity. A few well-chosen examples can be more effective than a large number of generic ones.

Example: Provide examples of compelling B2B case studies to guide LLaMA in generating new ones.

Prompt:
"Here are 2 examples of effective B2B case studies:

  1. [Insert example 1]

  2. [Insert example 2]

Analyse the structure, content, and tone of these case studies. Then, generate a new case study following a similar format, showcasing how our marketing automation software helped a B2B company increase its lead generation by 50% within 6 months."

Instruction Tuning

Recent research from Meta has showcased the potential of instruction tuning, a technique that involves fine-tuning LLMs on a diverse set of instructions and examples. Models like Alpaca, which have undergone instruction tuning, can follow complex, multi-step instructions and generate outputs in a variety of formats and styles.

Example: Provide LLaMA with a list of instructions for creating a B2B whitepaper outline.

Prompt:
"To create an outline for a B2B whitepaper on the role of AI in marketing, follow these steps:

  1. Identify the target audience and their key pain points related to AI in marketing.

  2. Create an introduction that highlights the importance of AI in modern marketing and previews the main points of the whitepaper.

  3. Divide the main body into 3-4 sections, each focusing on a specific aspect of AI in marketing (e.g., personalisation, predictive analytics, content generation).

  4. Under each section, include 3-4 sub-points that provide in-depth insights, real-world examples, and actionable advice.

  5. Conclude with a summary of the key takeaways and a call-to-action for readers to implement AI in their marketing strategies.

  6. Provide a list of references and additional resources for further reading."

4 - Google Gemini

Google Gemini is a large language model developed by Google DeepMind, designed to push the boundaries of conversational AI and natural language understanding. Gemini offers several unique features and best practices for prompt engineering.

By applying the Gemini-specific best practices below, you can harness the full potential of Google's model and generate high-quality, task-specific outputs across a wide range of applications.

API Cookbook

Google provides a comprehensive API Cookbook for Gemini, featuring a wide range of examples and best practices for prompt engineering. The cookbook includes detailed guidance on crafting effective prompts, leveraging context and examples, and optimising outputs for specific tasks. By exploring and adapting these examples to your own use case, you can quickly get up to speed with Gemini's capabilities and unlock its full potential.

Intent Classification

Gemini excels at understanding user intent, making it particularly well-suited for tasks like sentiment analysis, entity recognition, and dialogue management. When crafting prompts for Gemini, consider how you can leverage its intent classification capabilities to guide the model towards more accurate, context-aware responses. Use clear, specific language and provide examples that highlight the desired intent or sentiment.

Example: Leverage Gemini's intent classification capabilities to generate content that addresses specific customer pain points.

Prompt:
"Analyse the following customer reviews and feedback to identify the top 3 pain points related to B2B marketing automation: [Insert anonymised customer reviews and feedback]. For each pain point, generate a brief content idea that addresses the issue and provides a solution using our marketing automation software."

Knowledge Retrieval

Gemini can be combined with external knowledge retrieval systems to generate more informative, up-to-date responses. By incorporating relevant information from external sources into your prompts, you can help Gemini provide more accurate, contextually relevant outputs. Google's research has showcased the effectiveness of this approach across a range of tasks, from question answering to content generation.

Example: Integrate external knowledge sources to generate data-driven content.

Prompt:
"Using the attached industry reports and our own research, create an outline for an article on the 'State of B2B Marketing Automation in 2024'. The report should include statistical insights, trends, and best practices for B2B marketers looking to optimise their marketing automation strategies. Ensure all claims are supported by data from the provided sources."

Multilingual Support

Gemini offers extensive support for multilingual prompt engineering, enabling users to generate high-quality outputs in a wide range of languages. When crafting prompts for multilingual use cases, be mindful of language-specific nuances and cultural differences. Provide examples and context that are appropriate for the target language and audience.

Example: Generate content ideas for a global B2B audience.

Prompt:
"Brainstorm 5 content ideas for a series of blog posts targeting B2B decision-makers in the manufacturing industry. Each idea should be relevant and adaptable for our target markets in the USA, Germany, France, and Japan. Consider local industry trends, cultural nuances, and language preferences when generating the ideas."

Iterative Refinement

Google emphasises the importance of iterative refinement in prompt engineering, encouraging users to continuously test, analyse, and optimise their prompts based on the model's outputs. By engaging in a cyclical process of prompt design, evaluation, and refinement, you can progressively improve the quality and relevance of Gemini's responses for your specific task or domain.

Example: Continuously refine and improve generated content based on performance data and user feedback.

Prompt:
"Here is a blog post we recently published: [Insert blog post text]. Based on the user engagement data and feedback we've collected, please provide 3 suggestions for improving the post's relevance, readability, and overall impact. Consider factors such as content structure, tone, and the inclusion of additional examples or insights."

In Conclusion

While each platform offers its own unique features and capabilities, the fundamental principles of prompt engineering remain largely consistent across OpenAI, Anthropic, Meta, and Google. By understanding and applying these best practices, users can unlock the full potential of these powerful language models and generate high-quality, task-specific outputs. As the field of conversational AI continues to evolve, staying up-to-date with the latest prompt engineering techniques and platform-specific optimisations will be essential for driving innovation and achieving success in this exciting domain.

By applying these prompt engineering best practices and techniques across the OpenAI, Anthropic, Meta, and Google platforms, B2B content marketers can put large language models to work creating high-quality, engaging, and impactful content. Experiment with different prompts, refine your approaches based on performance data, and continuously adapt to the evolving capabilities of these powerful tools to stay ahead in the competitive world of B2B content marketing.
