
LLM Settings

When working with prompts, you interact with the LLM directly or via an API. You can configure a few parameters to get different results from your prompts.

Temperature - In short, the lower the temperature, the more deterministic the results, in the sense that the most probable next token is always picked. Increasing the temperature leads to more randomness, which encourages more diverse or creative outputs; you are essentially increasing the weights of the other possible tokens. In terms of application, you might want to use a lower temperature value for tasks like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase the temperature value.
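As a minimal sketch of how this looks in practice, the requests below use the OpenAI Python SDK with a low temperature for a factual question and a higher one for a creative task (the model name and prompts are placeholders; any chat-completion API with a temperature parameter works the same way):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Low temperature: favors the most probable tokens -> concise, factual answers
factual = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0,
)

# High temperature: flattens the token distribution -> more diverse, creative output
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a short poem about the sea."}],
    temperature=1.0,
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```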

Top_p - Similarly, top_p, a sampling technique used alongside temperature called nucleus sampling, controls how deterministic the model is when generating a response. Only the tokens making up the top_p probability mass are considered for sampling. If you are looking for exact and factual answers, keep this value low. If you are looking for more diverse responses, increase it to a higher value.

The general recommendation is to alter temperature or top_p, not both.
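Following that recommendation, this sketch tunes only top_p and leaves temperature at its default (again using the OpenAI Python SDK; the model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Low top_p: sample only from the smallest set of tokens whose cumulative
# probability reaches 0.1 -> exact, factual answers
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "List the planets of the solar system."}],
    top_p=0.1,  # raise toward 1.0 for more diverse responses
    # temperature left at its default, per the "alter one, not both" advice
)
print(response.choices[0].message.content)
```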

System message - This message provides the initial instructions to the model. You can provide various information in the system role, including:

  • A brief description of the assistant
  • Personality traits of the assistant
  • Instructions or rules you would like the assistant to follow
  • Data or information needed for the model, such as relevant questions from an FAQ

You can customize the system role for your use case or just include basic instructions. The system role/message is optional, but it's recommended to at least include a basic one to get the best results, as shown in the sketch below.
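Here is a minimal sketch of a system message in a chat request (OpenAI Python SDK; the assistant description, FAQ name, and model name are illustrative assumptions, not part of the guide):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {
        # The system message sets the assistant's description, personality, and rules
        "role": "system",
        "content": (
            "You are a concise support assistant for the Acme FAQ. "  # illustrative
            "Answer politely, and say 'I don't know' if the FAQ does not cover the question."
        ),
    },
    {"role": "user", "content": "How do I reset my password?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```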

Before starting with some basic examples, keep in mind that your results may vary depending on the version of the LLM you use.