
A study guide for Lee Boonstra's Prompt Engineering white paper
What is Prompt Engineering?
* Crafting effective prompts to guide Large Language Models (LLMs) toward accurate, useful outputs.
* It's iterative: experimenting, evaluating, and refining prompts is crucial.
Key Elements of Effective Prompt Engineering
1. LLM Output Configuration
Configure the model settings effectively:
* Output Length: More tokens = higher cost and latency.
* Temperature: Controls randomness.
  * Lower temperatures (0.0 - 0.3) → More deterministic and focused results.
  * Higher temperatures (>0.7) → More creative and varied outputs.
* Top-K: Limits sampling to the K highest-probability tokens.
* Top-P (nucleus sampling): Samples from top tokens until cumulative probability P is reached.
Recommended default configurations:
* Balanced results: Temperature 0.2, top-P 0.95, top-K 30.
* More creative: Temperature 0.9, top-P 0.99, top-K 40.
* Deterministic results: Temperature 0.0 (useful for math problems).
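As a concrete illustration, here is a minimal sketch of how these settings map onto an API call, assuming the google-generativeai Python SDK and a Gemini model name; parameter names differ across providers.

```python
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # assumes an API key is available
model = genai.GenerativeModel("gemini-1.5-flash")

# "Balanced" defaults from this guide: temperature 0.2, top-P 0.95, top-K 30.
response = model.generate_content(
    "Summarize prompt engineering in three bullet points.",
    generation_config={
        "temperature": 0.2,
        "top_p": 0.95,
        "top_k": 30,
        "max_output_tokens": 256,  # output length: more tokens = higher cost and latency
    },
)
print(response.text)
```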
2. Prompting Techniques
Zero-shot Prompting
* Provide simple instructions without examples.
* Good for straightforward tasks.
One-shot & Few-shot Prompting
* Include one or more examples within the prompt.
* Enhances accuracy and consistency, particularly useful for complex or structured tasks.
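For instance, a few-shot prompt can be assembled by prepending labeled examples to the new input; the sentiment-classification examples below are illustrative, not taken from the white paper.

```python
# Labeled examples to prepend (few-shot). With only the instruction and no
# examples, the same function would produce a zero-shot prompt.
EXAMPLES = [
    ("The service was fantastic and the staff were friendly.", "POSITIVE"),
    ("The package arrived late and the box was damaged.", "NEGATIVE"),
    ("It works as described, nothing more, nothing less.", "NEUTRAL"),
]

def few_shot_prompt(new_review: str) -> str:
    lines = ["Classify each review as POSITIVE, NEGATIVE, or NEUTRAL.", ""]
    for review, label in EXAMPLES:
        lines += [f"Review: {review}", f"Sentiment: {label}", ""]
    lines += [f"Review: {new_review}", "Sentiment:"]
    return "\n".join(lines)

print(few_shot_prompt("The food was cold but the view was amazing."))
```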
System, Contextual, and Role Prompting
* System prompting: Defines the overall task context and constraints (e.g., format outputs as JSON).
* Contextual prompting: Offers additional context for precise results.
* Role prompting: Assigns the model a persona or role (teacher, comedian, travel guide, etc.), shaping its tone and content.
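A small sketch of combining a system constraint with a role and extra context, assuming a recent version of the google-generativeai SDK (its `system_instruction` parameter); other SDKs expose a similar "system message" idea.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You are a friendly travel guide. "                 # role prompting
        "Answer only with valid JSON shaped like "          # system constraint
        '{"city": "...", "suggestions": ["...", "...", "..."]}.'
    ),
)

# Contextual prompting: extra context supplied alongside the actual question.
response = model.generate_content(
    "Context: the traveler has one free afternoon and likes museums.\n"
    "Question: what should they do in Amsterdam?"
)
print(response.text)
```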
Step-back Prompting
* First ask the model a broader, more general question, then feed that answer back as context for the specific task (see the sketch below).
* Activating relevant background knowledge before answering helps the model reason more effectively.
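A sketch of the two-call pattern, assuming a hypothetical `generate(prompt)` helper that wraps whichever LLM client you use; the level-design task is illustrative.

```python
def generate(prompt: str) -> str:
    """Hypothetical helper: replace the body with a call to your LLM client."""
    return f"<model output for: {prompt[:40]}...>"

task = "Write a one-paragraph brief for a new video game level set in an abandoned shipyard."

# Step 1: the broad, "stepped-back" question.
principles = generate("What are five ingredients that make a video game level memorable?")

# Step 2: the specific task, grounded in the broad answer.
brief = generate(f"Using these principles:\n{principles}\n\nNow do the following:\n{task}")
print(brief)
```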
Chain of Thought (CoT) Prompting
* Encourages LLMs to explain reasoning steps explicitly (e.g., math problems).
* Significantly improves accuracy and interpretability.
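In its simplest form this is one instruction appended to the question, sent at a low temperature; a minimal sketch (the age puzzle is a common CoT illustration):

```python
question = (
    "When I was 3 years old, my partner was 3 times my age. "
    "I am now 20 years old. How old is my partner?"
)

# Chain-of-thought trigger: ask for the reasoning explicitly, before the answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then state the final answer on its own line."
)
# Send cot_prompt with temperature 0 (see the CoT tips below). The reply should
# walk through the age difference (3 * 3 - 3 = 6) before concluding 20 + 6 = 26.
print(cot_prompt)
```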
Self-consistency
* Run the same prompt several times at a higher temperature, then take a majority vote over the answers (see the sketch below).
* Good for reasoning and classification tasks.
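A minimal sketch of the sample-and-vote loop, using a hypothetical `generate(prompt, temperature)` helper and a naive last-line answer extractor.

```python
from collections import Counter

def generate(prompt: str, temperature: float) -> str:
    """Hypothetical helper: replace with a call to your LLM client."""
    return "23 - 20 = 3, then 3 + 6 = 9.\n9"

def final_answer(response: str) -> str:
    """Naive extractor: assume the final answer is on the last line."""
    return response.strip().splitlines()[-1]

prompt = (
    "A cafeteria had 23 apples, used 20 for lunch, then bought 6 more. "
    "How many apples does it have now? Think step by step, "
    "then give the final answer on its own line."
)

# Sample several reasoning paths at a higher temperature, then majority-vote.
answers = [final_answer(generate(prompt, temperature=0.9)) for _ in range(5)]
winner, votes = Counter(answers).most_common(1)[0]
print(f"{winner} ({votes}/{len(answers)} votes)")
```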
Tree of Thoughts (ToT)
* Extends CoT by simultaneously exploring multiple reasoning paths.
* Effective for complex tasks needing deep exploration.
ReAct (Reason & Act)
* Combines reasoning with external tool usage (like search engines) for better problem-solving.
* Useful for factual queries requiring external validation or data.
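A compact sketch of the reason-act loop, with hypothetical `generate()` and `search()` helpers; the Thought/Action/Observation format follows the common ReAct convention rather than a specific library.

```python
def generate(prompt: str) -> str:
    """Hypothetical LLM call: replace with your client."""
    return "Thought: I should look this up.\nAction: search[Metallica band members]"

def search(query: str) -> str:
    """Hypothetical tool: replace with a real search API."""
    return "Metallica currently has four band members."

prompt = "How many children do the members of Metallica have in total?\n"
for _ in range(3):  # a few reason-act iterations
    step = generate(prompt)        # reason: the model thinks and may request an action
    prompt += step + "\n"
    if "Action: search[" in step:
        query = step.split("Action: search[", 1)[1].rstrip("]")
        prompt += f"Observation: {search(query)}\n"   # act: call the external tool
    else:
        break  # no action requested: the model produced its final answer
print(prompt)
```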
3. Automatic Prompt Engineering
* Automate prompt creation by asking an LLM to generate multiple candidate prompts for a task.
* Evaluate the candidates with metrics such as BLEU or ROUGE and keep the best-performing one (see the sketch below).
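A rough sketch of the evaluate-and-select step, scoring each candidate's output against a human-written reference with BLEU from `nltk`; the candidates, reference, and `generate()` helper are illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu  # pip install nltk

def generate(prompt: str) -> str:
    """Hypothetical LLM call: replace with your client."""
    return "I would like one Metallica t-shirt in size small."

# Candidate prompts, e.g. produced by asking an LLM to rephrase one instruction.
candidates = [
    "Rewrite this order as a polite customer request: one small Metallica t-shirt.",
    "Phrase this as something a customer would say: 1 Metallica tee, size S.",
]

# Human-written reference output to score candidates against.
reference = "I would like to order one Metallica t-shirt in size small."

scored = []
for prompt in candidates:
    output = generate(prompt)
    scored.append((sentence_bleu([reference.split()], output.split()), prompt))

best_score, best_prompt = max(scored)
print(f"Best prompt (BLEU {best_score:.2f}): {best_prompt}")
```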
4. Code Prompting Techniques
* LLMs can write, explain, translate, debug, and review code.
* Clearly instruct models on desired programming languages and outcomes.
* Test and verify the generated code for correctness.
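For example, a debugging prompt can pair a clear instruction with the offending code; the buggy function below is illustrative, and the model's reply should still be run and verified by hand.

```python
buggy_code = """
import os

def rename_files(folder, prefix):
    for name in os.listdir(folder):
        os.rename(name, prefix + name)  # bug: paths are relative to the CWD, not `folder`
"""

debug_prompt = (
    "The following Python function renames files in the wrong directory. "
    "Explain the bug, then return a corrected version of the function.\n\n"
    + buggy_code
)
print(debug_prompt)  # send this to the model, then test the code it returns
```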
5. Multimodal Prompting
* Involves using multiple formats (text, images, audio) in prompts.
* Enhances clarity and context (dependent on model capabilities).
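A minimal sketch assuming the google-generativeai SDK, which accepts a list mixing a PIL image with text; whether this works depends on the model's multimodal capabilities.

```python
import google.generativeai as genai
from PIL import Image  # pip install pillow

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Multimodal prompt: an image and a text instruction in the same request.
photo = Image.open("whiteboard_photo.jpg")  # illustrative file name
response = model.generate_content(
    [photo, "Transcribe the diagram on this whiteboard as a bulleted outline."]
)
print(response.text)
```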
Best Practices for Prompt Engineering
General Tips
* Provide clear, concise instructions.
* Include relevant examples: One-shot or few-shot examples dramatically improve performance.
* Design simple prompts: Avoid overly complex language or irrelevant information.
* Be specific about outputs: Clearly state expected results (structure, format, content).
* Favor positive instructions over negative constraints.
Controlling Output
* Explicitly instruct output length or style when necessary (e.g., "Explain quantum physics in a tweet-length message").
Variables in Prompts
* Use dynamic variables to easily adapt prompts (e.g., {city} → "Amsterdam").
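A template with placeholders is all this takes; a minimal sketch:

```python
template = "You are a travel guide. Tell me a fact about the city: {city}"

for city in ["Amsterdam", "Madrid", "Tokyo"]:
    prompt = template.format(city=city)  # {city} -> "Amsterdam", etc.
    print(prompt)  # send each filled-in prompt to the model here
```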
Input and Output Formats
* JSON is recommended for structured outputs to minimize hallucinations and increase reliability.
* JSON Schemas can help structure inputs, defining clear expectations for LLMs.
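A small sketch: request JSON explicitly, then validate the reply with the standard-library `json` module (the shape shown is illustrative).

```python
import json

prompt = (
    "List two prompt-engineering techniques. Return only valid JSON shaped like: "
    '{"techniques": [{"name": "...", "when_to_use": "..."}]}'
)

# reply = generate(prompt)  # hypothetical LLM call; a canned reply is used below
reply = '{"techniques": [{"name": "few-shot", "when_to_use": "structured tasks"}]}'

data = json.loads(reply)  # raises json.JSONDecodeError if the output is not valid JSON
for technique in data["techniques"]:
    print(technique["name"], "-", technique["when_to_use"])
```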
Iterative Development
* Continuously test, refine, and document prompts.
* Record prompt versions, configurations, model outputs, and feedback for reference and improvement.
Chain of Thought Specific Tips
* Always put the reasoning steps before the final answer.
* Set temperature to 0 for reasoning-based tasks to ensure deterministic responses.
Prompt Documentation
Use this structured format to document prompt attempts for easy management and future reference:
| Field | Details to include |
| --- | --- |
| Name | Prompt name/version |
| Goal | Single-sentence description of the prompt’s purpose |
| Model | Model name/version |
| Temperature | Value (0.0 - 1.0) |
| Token Limit | Numeric limit |
| Top-K | Numeric setting |
| Top-P | Numeric setting |
| Prompt | Full text of the prompt |
| Output | Generated output(s) |
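If you keep this log in code or export it to a spreadsheet, a small record type makes the same fields explicit; a sketch, not something the white paper prescribes.

```python
from dataclasses import dataclass

@dataclass
class PromptRecord:
    name: str           # prompt name/version
    goal: str           # single-sentence purpose
    model: str          # model name/version
    temperature: float  # 0.0 - 1.0
    token_limit: int
    top_k: int
    top_p: float
    prompt: str         # full text of the prompt
    output: str         # generated output(s)
```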