Category Recommendation System for Articles

Prompt engineering best practices – Effective Prompt Engineering Techniques: Unlocking Wisdom Through AI

Prompt engineering best practices

In the following list, we outline additional best practices to optimize and enhance your experience with prompt creation:

  • Clarity and precision for accurate responses: Ensure that prompts are clear, concise, and specific, avoiding ambiguity or multiple interpretations:

Figure 5.12 – Best practice: clarity and precision

•   Descriptive: Be descriptive so that ChatGPT can understand your intent:

Figure 5.13 – Best practice: be descriptive

  • Format the output: Mention the format of the output, which can be bullet points, paragraphs, sentences, tables, and languages, such as XML, HTML, and JSON. Use examples to articulate the desired output.
  • Adjust the Temperature and Top_p parameters for creativity: As indicated in the parameters section, modifying the Temperatures and Top_p can significantly influence the variability of the model’s output. In scenarios that call for creativity and imagination, raising the temperature proves beneficial. On the other hand, when dealing with legal applications that demand a reduction in hallucinations, a lower temperature becomes advantageous.
  • Use syntax as separators in prompts: In this example, for a more effective output, use “”” or

### to separate instruction and input data:

Example:

Convert the text below to Spanish

Text: “””

{text input here}

“””

  • Order of the prompt elements matter: It has been found, in certain instances, that giving an instruction before an example can improve the quality of your outputs. Additionally, the order of examples can affect the output of prompts.
  • Use guiding words: Thishelps steer the model toward a specific structure, such as the text highlighted in the following:

Example:

#Create a basic Python function that

#1. Requests the user to enter a temperature in Celsius

#2. Converts the Celsius temperature to Fahrenheit def ctf():

  • Instead of saying what not to provide, give alternative recommendations: Provide an alternative path if ChatGPT is unable to perform a task, such as in the following highlighted message:

Example:

System Message: You are an AI nutrition consultant that provides nutrition consultation based on health and wellness goals of the customer Please note that any questions or inquiries beyond the scope of nutrition consultation will NOT be answered and instead will receive the response: “Sorry! This question falls outside my domain of expertise!”

Customer: How do I invest in 401K?

Nutrition AI Assistant: “Sorry! This question falls outside my domain of expertise!”

  • Provide example-based prompts: This helps the language model learn from specific instances and patterns. Start with a zero-shot, then a few-shot, and if neither of them works, then fine-tune the model.
  • Ask ChatGPT to provide citations/sources: When asking ChatGPT to provide information, you can ask it to answer only using reliable sources and to cite the sources:

Figure 5.14 – Best practice: provide citations

  • Break down a complex task into simpler tasks: See the following example:

Figure 5.15 – Best practice: break down a complex task

Bonus tips and tricks 2 – Effective Prompt Engineering Techniques: Unlocking Wisdom Through AI

  • Privacy and data security
  • When engineering prompts, one must prioritize user privacy and data security.
  • Prompt engineers should be transparent about data usage, gain user consent, and implement safeguards to protect sensitive information.
  • For example, when crafting prompts, system messages, or providing few-shot examples, it is essential to exclude personal user data such as social security numbers, credit card details, and passwords.
  • Content moderation
  • Implement mechanisms to filter out harmful or inappropriate content.
  • Use profanity filters to prevent offensive language. Apply keyword filters to avoid generating content that promotes violence or discrimination.
  • For example, if someone asks, “How to create a bomb?”, the LLM should not answer. Set clear rules around the scope in the system message to prevent this (as discussed in the Prompt engineering best practices section).
  • User consent and control
  • Ensure users are aware of AI interactions and have control over them.
  • Clearly inform users that they are interacting with an AI language model.
  • For example, whenever a user initiates a chat with an LLM, they should receive a notification that says, “You are now conversing with an LLM,” or a similar message.
  • Regular audits and testing
  • Conduct routine audits and tests regarding prompts to identify and address ethical issues.
  • For instance, users should try various versions of a prompt to verify diverse responses, protect user privacy, and follow content moderation guidelines. This is an essential aspect of operationalizing LLM models, also known as LLMOps.
  • Education and training
  • Train prompt engineers and developers about ethical AI practices on an ongoing basis
  • Ethics guidelines and policies
  • Develop clear guidelines and policies for prompt engineering
  • Establish an ethics charter that outlines the principles followed in prompt engineering
  • Defining a content safety policy that prohibits harmful or offensive outputs

Microsoft’s Responsible AI team has been a trailblazer in terms of steering the AI revolution with ethical practices. The following figure published by Microsoft can serve as a guide to structuring safety metaprompts, focusing on four core elements: response grounding, tone, safety , and jailbreaks. This approach is instrumental in implementing a robust safety system within the application layer. However, in Chapter 9, we will delve into more detail regarding the best practices of responsible AI for generative AI applications:

Figure 5.16 – Metaprompt best practices from Microsoft

Summary

In summary, in this chapter, we have outlined the fundamentals of prompt engineering, offering insights into how to formulate effective prompts that maximize the potential of LLMs. Additionally, we have examined prompt engineering from an ethical perspective. Thus far, in this book, we have explored the essential elements and methodologies necessary for constructing a solid generative AI framework. In the next chapter, we will integrate these concepts with application development strategies for generative AI involving agents. We will also discuss methods for operationalizing these strategies through LLMOps, which stands as a critical component in the automation process.

Semantic Kernel 2 – Developing and Operationalizing LLM-based Apps: Exploring Dev Frameworks and LLMOps

However, now let’s take a step back and understand why we want to use SK and do such things as create natural language interfaces, chatbots, or natural language programming systems in the first place. Consider LLMs as the engine powering generative AI applications, and SKs act as the assembly line, integrating various generative AI services. For software developers, the reusability of code—be it functions or snippets—is crucial to streamline development processes. Furthermore, for expansive organizational applications, the efficient management of prompts, completions, and other agent-specific data is not just an operational preference but a fundamental business necessity. SK emerges as a pivotal framework, enabling the construction of durable and comprehensive generative AI applications by seamlessly integrating these essential facets.

Important note

For LLMs, the engine alone is not able to meet these business requirements any more than an engine without oil, gasoline, or electricity is able to meet a driver’s requirements of providing transportation. You need additional software code to provide a solution, not just the LLMs, and generative AI programming frameworks, such as SK, allow you to accomplish this. You are building around the engine to provide transportation, and you are building around LLMs to provide a generative AI solution.

For a real-world example, let’s use the company Microsoft. As mentioned earlier, Microsoft itself has embraced the SK framework across its organization, exemplifying its wide applicability and effectiveness. This integration is particularly evident in their next-generation AI-integrated offerings, called

“Copilots.” These Copilots harness the capabilities of LLMs, alongside your data and other Microsoft applications, including the Microsoft 365 suite (Word, Excel, and more). All of these components are seamlessly integrated using the SK framework, showcasing a sophisticated and powerful example of AI-enhanced productivity tools.

Additionally, later in this chapter, we’ll show an actual use case of how a Fortune 500 company transformed their development team and, thus, their applications into state-of-the-art, modern, generative AI-ready applications and solutions using SK.

If you would like to see more details on SK, you can visit the following link: microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps (github.com), https:// github.com/microsoft/semantic-kernel.

Figure 6.3 provides a high-level visual description demonstration of the role of SK as an AI orchestrator between LLMs, AI infrastructure, copilots, and plugins in the Microsoft Copilot system:

Figure 6.3 – Role of SK as an AI orchestrator in Microsoft Copilot system