LLM Prompting Techniques
A collection of effective prompting techniques for LLMs. This was just a fun project to build an LLM agent that looks for interesting prompting techniques. After you find a technique you like, research it yourself.
Zero-shot prompting
A technique where the prompt provides a clear instruction or task description without any examples. The LLM generates output based solely on the prompt and its training.
save tokens, simple tasks
Conversational Prompt Engineering (CPE)
CPE uses an interactive chat interface where the user and an LLM collaboratively create and refine prompts through iterative dialogue. The user provides input examples and output preferences, discusses requirements via chat, reviews outputs generated by a target model, and provides feedback for prompt refinement until a satisfactory prompt is produced. This method simplifies personalized prompt creation without needing labeled data.
improve output quality
Role Prompting
Zero-shot prompting technique where a specific persona or role is assigned to the model in the prompt (e.g., acting as 'a shepherd'), which can improve style and output quality, especially in open-ended tasks.
improve output quality
Prompt Chaining
Prompt Chaining is a technique where complex tasks are broken into subtasks, with each subtask handled by a separate prompt. The output of one prompt is fed as input to the next, forming a chain of prompt operations. This approach improves reliability, transparency, controllability, and performance of LLM applications, especially useful in conversational assistants and personalized user experiences.
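The flow above can be sketched in a few lines; `call_llm` is a hypothetical stand-in for a real LLM API client, and here it just echoes its prompt so the chaining is visible:

```python
# Minimal prompt-chaining sketch. `call_llm` is a placeholder for a real
# LLM API call; it echoes the prompt so the data flow is visible.
def call_llm(prompt: str) -> str:
    return f"<answer to: {prompt}>"

def summarize_via_chain(report: str) -> str:
    # Subtask 1: extract the key facts from the report.
    facts = call_llm(f"List the key facts in this report: {report}")
    # Subtask 2: the first prompt's output becomes the next prompt's input.
    return call_llm(f"Write a one-sentence summary of these facts: {facts}")
```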
improve output quality
Automatic Chain-of-Thought (Auto-CoT) Prompting
Automatically generates chain-of-thought exemplars from zero-shot prompts to build a few-shot CoT prompt, enhancing reasoning performance.
improve reasoning
Instruction tuning
Instruction tuning involves training the model on a dataset of prompts and desired outputs to make it better at following instructions. It generally enhances the model's ability to understand and execute user instructions accurately.
improve output quality
Self-Consistency Prompting
Self-Consistency prompting improves reasoning accuracy by generating multiple diverse reasoning paths and selecting the most consistent answer through majority voting. This reduces variability and errors compared to relying on a single chain of thought and is effective for complex reasoning tasks. The technique aggregates several reasoning outputs to improve reliability.
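The voting step can be sketched as follows; the canned list of answers stands in for real temperature-sampled LLM calls:

```python
from collections import Counter

# Self-consistency sketch: sample several reasoning paths (simulated here),
# keep only each path's final answer, and majority-vote.
def call_llm_sampled(prompt: str, i: int) -> str:
    # Placeholder for a temperature>0 LLM call; the i-th "sample" is canned.
    simulated_answers = ["42", "42", "41", "42", "42"]
    return simulated_answers[i % len(simulated_answers)]

def self_consistency(question: str, n_samples: int = 5) -> str:
    prompt = f"{question}\nLet's think step by step."
    answers = [call_llm_sampled(prompt, i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```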
improve reasoning
Retrieval-Augmented Generation (RAG)
RAG combines document retrieval with text generation by incorporating relevant external documents into the prompt input. By grounding responses in retrieved knowledge, this approach improves factual accuracy and reduces hallucinations. It handles input queries via vector search or other retrieval systems to supplement the model's parametric knowledge.
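A toy sketch of the retrieve-then-ground pattern; word overlap stands in for the vector search a real system would use, and the documents are invented:

```python
# Toy RAG sketch: rank documents by word overlap with the query (a real
# system would use vector search), then ground the prompt in the best match.
DOCS = [
    "Python was created by Guido van Rossum and first released in 1991.",
    "Rust is a systems language focused on memory safety.",
]

def retrieve(query: str) -> str:
    words = set(query.lower().replace("?", "").split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def rag_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```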
reduce hallucinations
Role-playing
Role-playing instructs the AI to assume the persona of an expert, celebrity, or character to tailor responses to a specific domain or perspective. It leverages the AI's broad knowledge to mimic a chosen role's style and expertise, producing more relevant and contextual outputs. This technique is useful for eliciting technical, creative, or narrative-driven responses.
improve output quality
Retrieval Augmented Generation
Retrieval Augmented Generation (RAG) integrates external retrieval systems with LLMs by retrieving relevant documents or passages which are then used as context for generation. This technique reduces hallucinations and allows LLMs to incorporate up-to-date or specific information not contained in their training data.
reduce hallucinations
Few-Shot Learning (FSL)
A broader machine learning paradigm involving adapting model parameters with a few examples, different from Few-Shot Prompting which only modifies prompts without parameter updates.
improve output quality
Step-Back Prompting
A Zero-Shot-CoT modification where the LLM is first asked high-level questions about relevant concepts before reasoning, improving performance on multiple benchmarks.
improve reasoning
Self-Ask
Zero-shot prompting technique where the model decides if follow-up questions are needed, generates them, answers them, then answers the original question to improve reasoning outcomes.
improve reasoning
Zero-Shot Chain-of-Thought (Zero-Shot-CoT)
A CoT variant without exemplars, appending a thought-inducing phrase such as 'Let's think step by step' to induce the model to generate intermediate reasoning.
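Since the technique is purely a prompt transformation, a sketch is one function:

```python
# Zero-Shot-CoT: append the thought-inducing phrase to the question; no
# exemplars are needed.
def zero_shot_cot(question: str) -> str:
    return f"{question}\n\nLet's think step by step."
```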
improve reasoning
Reflexion
A technique where the model reflects on its previous outputs to improve future responses.
self-improvement
Prompt Tuning
A method where prompts are tuned or optimized for specific tasks, often involving learnable prompt embeddings, to improve task performance without modifying the entire model.
improve output quality and task specificity
Multimodal CoT
Multimodal Chain-of-Thought prompting extends CoT to multiple modalities such as images and text, allowing the model to reason across different input types. This technique broadens the LLM’s applicability and cognitive capabilities.
improve reasoning
Tree of Thoughts (ToT)
Tree of Thoughts is a framework that enables language models to explore multiple reasoning paths like a tree, using lookahead, backtracking, and evaluation of intermediate steps for complex problem solving. It generalizes Chain-of-Thought prompting by maintaining a search tree of coherent reasoning sequences. This method combines language generation with search algorithms to enhance planning and strategic reasoning.
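A tiny skeleton of the search loop, assuming stubbed `propose` and `score` functions in place of the LLM calls a real implementation would make:

```python
# Tiny Tree-of-Thoughts skeleton: propose candidate next thoughts, score
# them, keep the best `beam`, repeat. Both helpers are LLM-call stand-ins.
def propose(thought: str) -> list[str]:
    return [thought + "a", thought + "b"]  # placeholder proposals

def score(thought: str) -> int:
    return thought.count("a")  # placeholder evaluator

def tree_of_thoughts(depth: int = 3, beam: int = 2) -> str:
    frontier = [""]
    for _ in range(depth):
        candidates = [t for f in frontier for t in propose(f)]
        # Keep only the most promising branches (implicit pruning).
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```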
improve reasoning
Iterative Refinement
Iterative refinement involves multiple prompting rounds to progressively improve the LLM's outputs. By providing feedback and guiding revisions at each stage, this technique enhances accuracy and polish, particularly for writing or creative tasks. Clear, specific feedback is key for effective refinement.
improve output quality via revision
Provide Context When Possible
Giving relevant background information or explaining the purpose behind a prompt improves response accuracy, especially for complex or specific topics. Context helps the LLM generate outputs tailored to the user’s exact needs. Omitting context or using vague requests reduces relevance of responses.
improve output quality
Rephrase and Respond (RaR)
The LLM is instructed to rephrase and expand the question before responding, which can be done in one or two passes, improving performance on multiple benchmarks.
improve reasoning
Demonstration Ensembling (DENSE)
Creates multiple few-shot prompts with different exemplar subsets and aggregates their outputs to produce a final answer, reducing variance and improving accuracy.
improve output quality
Self-Ask Prompting
Self-Ask prompting involves asking follow-up questions related to the initial query, effectively decomposing complex questions into simpler sub-questions. When combined with Chain of Thought, it helps the model gather additional information step-by-step to arrive at a more accurate answer.
improve reasoning
Tree of Thoughts (ToT) Prompting
Tree of Thoughts prompting enables exploration of multiple reasoning paths in a tree-like structure, allowing multiple solution attempts, path evaluation, and backtracking when needed. It excels in creative problem-solving, mathematical reasoning, and solving complex puzzles by systematically considering alternative paths.
improve reasoning
Chain-of-Verification (COVE)
Generates an answer, creates related verification questions, answers them, then uses that information to produce a revised, more accurate answer.
improve reasoning
Self Consistency
Self Consistency is a decoding strategy that improves chain-of-thought prompting by sampling a diverse set of reasoning paths and selecting the most consistent answer among those. Instead of using greedy decoding, multiple reasoning traces are generated and the final answer is chosen by marginalizing over these samples, improving accuracy in complex reasoning tasks.
improve reasoning, improve output quality
ReAct Prompting
ReAct prompting interleaves reasoning traces and task-specific actions within the same output, enabling models to update action plans and handle exceptions dynamically. This synergy enhances interpretability and task performance across language and decision-making tasks, effectively reducing hallucinations through interaction with external environments like Wikipedia APIs.
reduce hallucinations; improve reasoning and interaction
Structured Chain-of-Thought (SCoT) prompting
Builds upon CoT by leveraging explicit program structures such as sequence, branch, and loop to generate intermediate reasoning steps, aligning reasoning more closely with actual programming logic.
improve reasoning
Program-aided Language Models (PAL)
PAL enables a language model to generate and execute external code representations of reasoning steps, such as Python programs or symbolic solvers. This allows models to perform precise calculations, verify logic by running programs, and improve reasoning accuracy. However, this approach depends on integration with external computational tools, which may limit scalability.
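A minimal sketch of the generate-then-execute pattern; the canned `call_llm` return value stands in for real model output:

```python
# PAL sketch: ask the model for code instead of a direct answer, then run
# that code for an exact result.
def call_llm(prompt: str) -> str:
    # Placeholder: a real model would write this program from the prompt.
    return "result = sum(range(1, 101))"

def pal(question: str) -> int:
    code = call_llm(
        f"Write Python computing the answer to: {question}\n"
        "Store the final value in a variable named `result`."
    )
    namespace: dict = {}
    # NOTE: exec on model output is unsafe; sandbox it in real systems.
    exec(code, namespace)
    return namespace["result"]
```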
improve reasoning
System 2 Attention (S2A)
Zero-shot prompting method where the LLM first rewrites the prompt to remove unrelated information, then a second LLM call produces the final response from the cleaned prompt, enhancing focus on relevant information.
reduce hallucinations
Re-reading (RE2)
Adds a phrase instructing the model to read the question again in the prompt, along with repeating the question; improves reasoning performance, especially on complex questions.
improve reasoning
Program-of-Thoughts
Generates programming code as reasoning steps, executed by a code interpreter to provide answers, excelling in math and programming domains but less in semantic reasoning.
improve reasoning
Automatic Prompt Engineer (APE)
Generates multiple candidate zero-shot prompts using exemplars, scores and iteratively refines them via prompt paraphrasing until desired criteria are met.
improve output quality
Tree-of-Thoughts (ToT) Prompting
Tree-of-Thoughts (ToT) prompting expands on Chain-of-Thought by managing a tree structure of intermediate reasoning steps, enabling exploratory search with look-ahead and backtracking. It systematically evaluates multiple reasoning paths to solve complex tasks requiring exploration, greatly increasing success rates on problems like Game of 24 and word puzzles compared to linear CoT.
improve reasoning
Self-Criticism
This technique involves instructing the model to evaluate its own response and improve upon it. It helps in reducing errors and hallucinations.
improve output quality
Agents
Enabling models to act as autonomous agents that can use tools, reason, and make decisions.
autonomous reasoning
Task Decomposition
Task decomposition breaks complex tasks into smaller, manageable subtasks for the LLM to address sequentially. This reduces cognitive load and improves accuracy on intricate problems by focusing the model on discrete components. It fosters coherent final outputs and facilitates error identification and correction.
handle complex tasks, improve output quality
Structure the Prompts
Organizing prompts clearly using bullet points, headings, or numbering helps the LLM to focus and understand different parts of the query better. Structured prompts guide the model to respond in a well-organized manner, such as producing bullet lists or separating information under subheadings. Avoid unstructured multi-part prompts that can confuse the model and lead to incomplete answers.
improve output quality
Chain-of-Thought (CoT) Reasoning
Chain-of-Thought prompting involves breaking down a problem into a series of intermediate reasoning steps. This step-by-step approach helps large language models solve complex problems by generating intermediate logical steps before the final answer. It improves accuracy in multi-step problem-solving tasks like mathematics, logical reasoning, and commonsense inference.
improve reasoning
Reinforcement Learning from Human Feedback (RLHF)
RLHF trains LLMs using human feedback to align model outputs with human preferences, improving logical consistency and reducing errors. It involves training a reward model from human rankings and optimizing the base model via reinforcement learning algorithms like PPO. RLHF enables iterative refinement of reasoning through human-guided objectives.
improve reasoning accuracy
DSPy (Dynamic Structured Prompting in Python)
DSPy is an open-source, code-first framework for creating and managing complex prompt pipelines programmatically. It treats LLM calls as modular components enabling multi-step processes such as content generation, user feedback integration, scoring, and evaluation to iteratively refine outputs across modules. DSPy supports adaptive, logic-driven workflows that improve over time through structured interactions and user feedback.
improve output quality
Chain of Thought Prompting (CoT)
Chain of Thought Prompting enables complex reasoning capabilities by encouraging the model to generate intermediate reasoning steps before providing the final answer. It can be applied in zero-shot or few-shot formats to improve performance on arithmetic, symbolic, and logical reasoning tasks. The model is prompted to think step by step to arrive at a solution, significantly reducing errors stemming from skipping reasoning.
improve reasoning
Prompt Mining
Discover optimal prompt template components by analyzing large corpora to find more frequently occurring formats or phrases that improve prompt performance, effectively optimizing middle words in prompts.
improve output quality
Cumulative Reasoning
Generates potential intermediate steps, evaluates and accepts/rejects them, checks if the final answer is present, repeating as necessary to improve reasoning accuracy.
improve reasoning
Skeleton-of-Thought (SoT) Prompting
SoT prompting involves providing a structured high-level template or skeleton for the model output. The model then fills in each section carefully, ensuring completeness and adherence to a desired format. This method aids in generating well-organized and balanced responses without excessive or irrelevant content.
improve output quality
In-Context Prompting
In-Context Prompting uses previous inputs and outputs within the context window to maintain memory over multiple interactions. This technique allows the model to recall past conversation details, improving coherence and relevance in extended dialogues by leveraging attention mechanisms.
improve output quality
Persona Pattern
This prompting technique involves instructing the LLM to act as a specific persona and perform tasks accordingly to generate tailored responses. By specifying a role, such as a detective or a personal trainer, the model outputs answers that align with the persona's perspective and expertise. This approach enhances contextual relevance and creativity in the output.
improve output quality
EmotionPrompt
EmotionPrompt leverages psychological emotional stimuli by placing the model in a situation akin to high pressure, compelling it to perform correctly. This technique is based on the premise that LLMs possess a form of emotional intelligence and that emotional cues can enhance their performance. Experiments show that this can increase their performance by approximately 10%, although the exact mechanisms are still under discussion.
improve output quality
Step-by-step instructions
Breaking down complex tasks into sequential steps to guide the AI through a logical process, resulting in clearer and more accurate responses.
improve reasoning
Sequential Prompting
Sequential prompting involves building a conversation by creating prompts that build upon previous responses. This technique is useful for complex tasks that require refinement or expansion over multiple interactions.
improve reasoning
Recursive Criticism and Improvement (RCI)
A prompting technique that involves iterative critique and improvement of the generated code to mitigate security weaknesses. It prompts the LLM to critically evaluate its own output and improve upon it in multiple iterations.
improve output quality
OpenAI Prompt Engineering Guidelines
Rules from OpenAI include writing clear instructions, providing reference text, splitting complex tasks into simpler subtasks, and allowing the model time to 'think' to improve prompt effectiveness.
improve output quality
Tree-of-Thought Prompting
Tree-of-Thought prompting encourages the model to explore multiple reasoning pathways or potential solutions before converging on a final answer. This branching approach allows considering various dimensions and outcomes of a problem, which is especially advantageous for tasks with complex or multifaceted solutions. It leads to richer, more nuanced responses by evaluating alternatives before deciding.
improve reasoning
Role-based Prompting
Role-based prompting involves instructing the model to adopt a specific professional or functional role (e.g., scientist, teacher) when generating responses. This contextual framing influences the model's tone, style, and depth, resulting in answers that align with the designated role's expertise and perspective. It is effective for domain-specific explanations or tailored communication.
improve output quality
The audience is ...
Informing the LLM about the intended reader or audience helps the model generate more targeted and appropriate responses tailored to that audience's understanding or needs.
improve output quality
Provide examples
Few-shot prompting involves including example inputs and outputs inside the prompt to guide the model's behavior without additional training, effectively demonstrating the expected task format and improving in-context learning.
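Assembling such a prompt is simple string work; the sentiment examples below are invented for illustration:

```python
# Few-shot prompt assembly: worked input/output pairs precede the new input
# so the model can infer the task format in-context.
EXAMPLES = [
    ("great movie, loved it", "positive"),
    ("a waste of two hours", "negative"),
]

def few_shot_prompt(review: str) -> str:
    shots = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in EXAMPLES)
    return f"{shots}\nReview: {review}\nSentiment:"
```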
improve output quality
Format your prompt
Use clear and consistent formatting such as delimiters ('###') to structure prompts into sections like instructions, examples, and questions. This enhances readability and helps the model distinguish context. For example: ###Instruction### Provide an overview of Python. ###Example### Python is known for its simplicity. ###Question### What are common uses of Python?
improve output quality
Use delimiters
Clarify different sections of prompts using distinct delimiters or markers such as '###Task###' to improve prompt parsing and structure. Example: '###Task### Write a summary. ###Details### Focus on character motivations.'
improve output quality
Combine techniques
Integrate multiple prompting strategies like Chain-of-Thought (CoT) reasoning with few-shot prompting for more sophisticated and effective prompts. Example: 'Think step by step about how to create a business plan. Here’s an example of what to include: executive summary, market analysis, etc.'
improve reasoning
Utilize output primers
End prompts with the anticipated start of the output to guide the model towards the expected response format or content. For example, 'Explain the importance of biodiversity. Biodiversity is important because…'
improve output quality
Mimic provided samples
Guide the model to replicate the style and language of provided text samples by including instructions such as, 'Please use the same language based on the provided text.'
improve output quality
Be Clear and Specific
This technique emphasizes creating prompts that are specific and clear to help the LLM understand the requirements precisely, leading to more accurate and relevant outputs. Avoid vague and overly broad questions that can produce a wide range of unrelated responses. Being specific guides the model’s understanding to ensure outputs align with user needs.
improve output quality
Zero-Shot Learning
This technique involves giving the AI a task without any prior examples. You describe what you want in detail, assuming the AI has no prior knowledge of the task. It is useful for tasks where you expect the AI to understand the request from the instruction alone.
improve output quality
Adjust the LLM’s temperature for creativity and consistency
The temperature setting controls creativity and predictability of responses. Lower temperatures produce more focused and consistent answers; higher temperatures yield more creative but sometimes unpredictable outputs.
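Mechanically, temperature divides the logits before the softmax; this quick sketch shows why low values concentrate probability mass on the top token:

```python
import math

# Temperature rescales logits before softmax: dividing by a small value
# sharpens the distribution, a large value flattens it.
def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

cold = softmax_with_temperature([2.0, 1.0, 0.5], temperature=0.2)
hot = softmax_with_temperature([2.0, 1.0, 0.5], temperature=2.0)
```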
improve reasoning
Control the length and detail of responses
Explicitly guiding the LLM on response length and detail can help tailor outputs to your needs—requesting detailed or concise answers as appropriate for the context.
improve output quality
Knowledge Distillation
Knowledge Distillation compresses the natural language information of hard prompts into soft prompts by training a student model to mimic a teacher model's output distribution. It helps reduce prompt length while maintaining model performance by guiding the student model using outputs from a better-performing teacher model, often optimizing via Kullback-Leibler divergence loss.
prompt compression
Chain-of-Logic (CoL)
Chain-of-Logic is a structured prompting technique designed specifically for complex rule-based reasoning tasks. It focuses on the logical relationships between components, making it especially effective for legal reasoning and other rule-based decision-making scenarios. This technique provides interpretable decisions based on logical relationships.
improve reasoning
Self-Generated In-Context Learning (SG-ICL)
Automatically generate exemplars using a generative AI to improve Few-Shot prompting performance when training data is unavailable; generated exemplars are less effective than actual data.
improve output quality
Style Prompting
Zero-shot prompting technique where styles, tones, or genres are specified in the prompt to modify the output style, similar to role prompting.
improve output quality
SimToM
A two-prompt zero-shot approach modeling multiple perspectives, by extracting facts known to one person in a question and answering based solely on these facts, to reduce irrelevant information impact.
reduce hallucinations
Thread-of-Thought (ThoT) Prompting
Uses an improved thought inducer like 'Walk me through this context in manageable parts step by step' to enhance CoT reasoning over large, complex contexts in QA and retrieval.
improve reasoning
Uncertainty-Routed Chain-of-Thought Prompting
Samples multiple CoT reasoning paths and selects the majority answer if agreement exceeds a threshold, otherwise falls back to the greedy response; improves benchmark performance for GPT-4 and Gemini Ultra.
improve reasoning
Recursion-of-Thought
Recursively solves sub-problems within a reasoning chain by invoking additional prompts, allowing for solving problems otherwise limited by prompt context length.
improve reasoning
Faithful Chain-of-Thought
Generates CoTs mixing natural and symbolic (e.g., Python) languages, using task-dependent symbolic languages to improve reasoning.
improve reasoning
Skeleton-of-Thought
Accelerates answers via parallelization by generating a skeleton of sub-problems then solving them in parallel and concatenating outputs.
improve efficiency
Consistency-based Self-adaptive Prompting (COSP)
Constructs Few-Shot CoT prompts by running Zero-Shot CoT with Self-Consistency and selecting highly agreeing outputs as exemplars, then re-applies Self-Consistency.
improve reasoning
Self-Calibration
The LLM is prompted to answer a question, then to judge whether that answer is correct; this helps estimate model confidence when deciding whether to accept or revise responses.
improve output quality
Self-Refine
Iterative framework where the LLM critiques its previous answer, then revises it based on feedback, continuing until a stopping condition is met, improving reasoning and generation tasks.
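The loop can be sketched as follows; the canned `call_llm` responses are placeholders for real generator, critic, and reviser calls:

```python
# Self-Refine loop sketch: draft, critique, revise until the critic accepts
# or a round limit is reached. `call_llm` is a canned placeholder.
def call_llm(prompt: str) -> str:
    if prompt.startswith("Critique"):
        # Placeholder critic: accepts any answer mentioning "base case".
        return "STOP" if "base case" in prompt else "Mention the base case."
    if prompt.startswith("Revise"):
        return "Recursion is a function calling itself until a base case."
    return "Recursion is a function calling itself."  # initial draft

def self_refine(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Answer: {task}")
    for _ in range(max_rounds):
        feedback = call_llm(f"Critique this answer: {draft}")
        if feedback == "STOP":  # stopping condition met
            break
        draft = call_llm(f"Revise this answer using the feedback: {draft} | {feedback}")
    return draft
```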
improve output quality
Reversing Chain-of-Thought (RCoT)
The LLM generates a reconstructed problem from an answer, compares it with the original, identifies inconsistencies, and uses feedback for answer revision, enhancing correctness.
improve reasoning
Gradient-free Instructional Prompt Search (GrIPS)
Performs edit-based operations such as deletion, addition, swapping, and paraphrasing to create prompt variations for optimizing prompts without gradients.
improve output quality
Prompt Optimization with Textual Gradients (ProTeGi)
A multi-step pipeline passing batch inputs and outputs through a critique prompt, generating new prompts and selecting them via bandit algorithms to optimize prompt templates.
improve output quality
RLPrompt
Uses a frozen LLM and an unfrozen module to generate prompt templates, scores them, and updates the module using Soft Q-Learning, often selecting non-grammatical optimal prompts.
improve output quality
Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O)
Complex reinforcement learning method involving policy gradients, custom prompt scoring, and interactive dialogues with an LLM to construct prompts.
improve output quality
Verbalizer
A component of answer engineering that maps output tokens or spans to labels in labeling tasks, facilitating consistent interpretation of model outputs.
improve output quality
Show-me versus Tell-me Prompting
This method instructs the model to either demonstrate (show) or describe (tell) concepts depending on the user’s information needs. It helps tailor responses to preferred output formats, such as diagrams or textual explanations.
improve output quality
Target-your-response (TAR) Prompting
TAR prompting focuses the model’s output on specific targets or objectives, clarifying response style and format to improve relevance and brevity. It involves explicitly indicating desired length, style, or detail level.
improve output quality
Self-reflection Prompting
Self-reflection prompting has the model critically evaluate its own outputs and revise answers based on introspection. This iterative review improves quality and thoughtfulness, especially for complex or ethically nuanced questions.
improve output quality
Prompt to Code
This technique instructs the model to generate functional programming code according to user-specified requirements. It leverages the model's programming knowledge to produce code snippets in desired languages and formats.
improve output quality
Chain-of-Knowledge (CoK) Prompting
CoK breaks down complex tasks into coordinated steps involving reasoning preparation and dynamic knowledge adaptation from various sources, including internal knowledge, external databases, and prompts. This systematic approach addresses factual hallucinations and improves reasoning by grounding model outputs in diverse, adaptable knowledge.
reduce hallucinations; improve reasoning
Chain-of-Code (CoC) Prompting
CoC improves language model reasoning by formatting semantic sub-tasks as pseudocode, enabling code emulation for logic and semantics. This 'think in code' approach reduces errors and enhances accuracy on challenging reasoning benchmarks, including BIG-Bench Hard, outperforming CoT and other baselines.
code generation and execution; improve reasoning
Rephrase and Respond (RaR) Prompting
RaR addresses differences in human and LLM thought framing by allowing LLMs to rephrase and expand questions within a single prompt, improving comprehension and response accuracy. This two-step approach enhances semantic clarity and reduces ambiguity across diverse tasks.
understanding user intent
Question Refinement Pattern
This prompt improvement technique suggests better or clearer versions of user questions to improve answer quality. It can optionally prompt the user to accept the refined question before proceeding, enhancing accuracy and relevance.
improve output quality
Cognitive Verifier Pattern
This technique requires the LLM to follow a set of rules when answering questions, generate additional clarifying subquestions, and combine their answers to produce a more accurate final response. It acts as a mechanism to verify and refine answers for complex queries.
reduce hallucinations
Game Play Pattern
This pattern involves using prompts to create or engage in games with defined rules, themes, and objectives. It enables interactive and entertaining exchanges while encouraging knowledge or skill testing.
improve output quality
ReAct (Reasoning + Acting)
ReAct is a prompting framework that enables LLMs to generate reasoning traces (thoughts) and perform task-specific actions by alternating between thinking and acting steps until reaching a final answer. This allows interpretation tracking, plan updates, exception handling, and interaction with external environments or knowledge bases. Though powerful and more interpretable, it involves higher costs and can be prone to derailing away from the main task.
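The alternating loop can be sketched with stubs; both the model policy and the single `lookup` tool below are placeholders for real components:

```python
# ReAct loop sketch: the model alternates Actions with tool Observations
# until it emits a final answer. Model and tool are stubs.
def lookup(term: str) -> str:
    facts = {"Python first release": "1991"}  # toy knowledge base
    return facts.get(term, "unknown")

def call_llm(transcript: str) -> str:
    # Placeholder policy: act once, then answer from the observation.
    if "Observation:" not in transcript:
        return "Action: lookup[Python first release]"
    return "Final Answer: 1991"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = call_llm(transcript)
        if step.startswith("Final Answer:"):
            return step[len("Final Answer:"):].strip()
        tool_arg = step[step.index("[") + 1 : step.index("]")]
        transcript += f"\n{step}\nObservation: {lookup(tool_arg)}"
    return "no answer found"
```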
improve reasoning
Reflexion and Self-Reflection
This method involves prompting the model to self-reflect, such as asking "Are you sure?" after an answer, to encourage re-evaluation and potentially better responses. The Reflexion framework further maintains an episodic memory buffer of reflective text induced by feedback to enhance future decision-making. Its compatibility with other techniques like CoT, ReAct, and ReWOO makes it powerful, though there is a risk of reinforcing hallucinations; testing on your use case is recommended.
improve output quality
Step-By-Step Reasoning (SSR)
SSR guides the model to break down complex problems into a sequence of intermediate reasoning steps to arrive at an answer. This systematic approach is helpful for tasks requiring multi-step thinking such as arithmetic, logical reasoning, or complex decision-making.
improve reasoning
Tree of Thought (ToT)
ToT represents reasoning as a tree structure where multiple possible thought processes (branches) are explored. Each branch represents a different line of reasoning or assumption to explore diverse possibilities and evaluate plausibility. It's effective for multi-stage reasoning or problems with multiple potential solutions.
improve reasoning
Retrieval-Augmented Generation (RAG) Prompting
Involves incorporating external data or documents into prompts to augment the AI's knowledge base dynamically, enabling more specific and informed responses.
augment knowledge base
Context and background inclusion
Provides background information or parameters to help the AI generate more accurate and relevant responses. Examples include listing top programming languages or role-based scenarios.
improve reasoning
Contextual priming
Providing relevant context or background information in the prompt helps the model better understand the task and generate more accurate, coherent responses.
improve output relevance
Add Context for Better Results
Provides additional background, instructions, or guidelines to help the AI understand the task better, which leads to more accurate and relevant outputs. It involves supplementing the prompt with necessary details.
improve output quality
Specific Prompting
Crafting prompts with precise, detailed instructions to guide AI responses. It involves including relevant details, constraints, and desired formats to improve response quality.
improve output quality
Thought Generation
This approach prompts the model to generate a series of thoughts or reasoning steps before arriving at the final answer. It encourages the model to think through the problem.
improve reasoning
Split complex tasks into simpler ones
Breaking down complicated prompts into smaller, manageable steps prevents overwhelm and enhances response quality. It allows the AI to focus on one aspect at a time and leads to more accurate, detailed, and organized outputs.
complex task decomposition
Specifying Target Audience
Include the intended audience details in the prompt to tailor responses appropriately, such as explaining complex concepts to beginners or simplifying for children.
improve reasoning
Use XML tags
Structuring prompts with XML tags to delineate sections or instructions.
improve output quality
Prefill Claude's response
Pre-filling responses or parts of prompts to control the output.
improve output quality
Embeddings
Textual inversion embeddings act as keywords that modify the style or attributes of generated images, allowing for style transfer or specific attribute emphasis.
enhance style, customize outputsFew-shot pattern
Provides the LLM with example input-output pairs to teach it the task at hand, leveraging in-context learning for better task performance.
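The pattern can be sketched as simple prompt assembly; the `Input:`/`Output:` labels and blank-line separator are conventions for this sketch, not requirements:

```python
# Sketch: assembling a few-shot prompt from input/output example pairs.
# The "Input:"/"Output:" labels are conventions, not requirements.
def few_shot_prompt(examples, query):
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # End with the query and a dangling "Output:" for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [("great movie!", "positive"), ("waste of time", "negative")]
prompt = few_shot_prompt(examples, "I loved every minute")
```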
save tokens, improve reasoningPrompt structuring
Designing prompts by starting with defining the role, providing context/input data, and then giving the instruction to create a clear and logical flow.
improve output qualityTaxonomy of prompting techniques
A classification system that categorizes various prompt engineering techniques, providing a structured understanding of the field.
organize and categorize prompting methodsSystem Prompting
System Prompting sets the context and overarching goal for the LLM, guiding its behavior throughout the interaction. It defines the model’s role or constraints, such as instructing it to generate JSON. This technique is also used to enforce safety and tone instructions.
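A sketch of a system prompt that pins both role and output format; the messages layout follows common chat-API conventions and no service is called:

```python
# Sketch: a system prompt that fixes the model's role and constrains it to
# JSON output. The messages layout follows common chat-API conventions.
SYSTEM = (
    "You are a travel assistant. "
    "Respond only with JSON of the form {\"city\": str, \"reason\": str}."
)

def build_request(user_text):
    return {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_text},
        ]
    }

request = build_request("Suggest a city for a rainy-day museum trip.")
```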
guide model behaviorPrompt Template
A predefined template used to structure prompts for LLMs. It standardizes how prompts are composed, often including placeholders for dynamic content.
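A minimal sketch using Python's standard-library `string.Template`; the placeholder names are illustrative:

```python
from string import Template

# Sketch: a reusable prompt template with named placeholders.
REVIEW_TEMPLATE = Template(
    "You are a $role.\nTask: $task\nText:\n$text"
)

prompt = REVIEW_TEMPLATE.substitute(
    role="technical editor",
    task="Fix grammar without changing meaning.",
    text="Their going to the store.",
)
```

Keeping the template separate from the dynamic content standardizes prompt composition across an application.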
improve output qualityDirective
Specific instructions embedded within prompts that guide the model's response, ensuring clarity and focus.
improve reasoning and output qualityCompare . . .
Asks the AI to analyze and contrast two or more items, highlighting similarities and differences. Commonly used for comparison essays or decision making.
comparison analysisPrompt Engineering / Prompt Design
The deliberate construction of prompts to steer the model's responses in a desired direction, such as clearer, more accurate, or more creative outputs.
improve output qualityAtom-of-Thoughts
A prompt engineering technique that decomposes a complex problem into independent, atomic sub-questions, which are solved individually and then merged to form a comprehensive answer. Inspired by the principles of Markovian reasoning, it aims to enhance reasoning efficiency, accuracy, and scalability in LLMs.
improve reasoningChain of Draft
A prompting strategy inspired by human cognitive processes that emphasizes generating minimalistic yet informative intermediate reasoning outputs, focusing only on essential information to solve tasks. It aims to reduce verbosity and token usage while maintaining or improving reasoning accuracy.
improve output qualityFew-shot learning with demonstrations
A variation of few-shot prompting where explicit demonstrations are included in the prompt to teach the model the task explicitly through examples. This can improve the model's understanding and performance.
improve output qualityPrompt Structure and Clarity
State the intended audience in the prompt and formulate the prompt clearly and structurally; an explicit, well-organized formulation guides the LLM's response in line with the broader prompt-design principles.
improve output qualitySpecificity and Information (Few-shot Prompting)
Use example-driven prompting by including few-shot examples in the prompt to provide the model with relevant context and improve quality of response.
improve output qualityUser Interaction and Engagement
Allow the model to ask precise details and requirements iteratively until it has enough information to provide the needed response, enhancing clarity and accuracy.
improve output qualityContent and Language Style
Instruct the LLM on the desired tone and style of the response to match audience expectations and enhance readability and appropriateness.
improve output qualityComplex Tasks and Coding Prompts (Stepwise Decomposition)
Break down complex tasks into a sequence of simpler steps via multiple prompts to allow better understanding and manageable processing by the LLM.
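A sketch of a two-step decomposition, where the first prompt's output feeds the second; `ask_model` is a stub standing in for any LLM call so the example runs offline:

```python
# Sketch: a two-step prompt chain where the first prompt's output feeds the
# second. `ask_model` is a stub standing in for an LLM call.
def chain(text, ask_model):
    summary = ask_model(f"Summarize in one sentence:\n{text}")
    return ask_model(f"Translate to French:\n{summary}")

# Stub model for demonstration only.
def fake_model(prompt):
    if prompt.startswith("Summarize"):
        return "The report describes Q3 growth."
    return "Le rapport décrit la croissance du T3."

result = chain("Long quarterly report text...", fake_model)
```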
improve reasoningPrompt Elements Framework (Instruction, Context, Input Data, Output and Style Format)
A prompt should consist of distinct elements such as clear instructions, context information, input data to respond to, and output format/style instructions, including role definitions to guide LLM behavior.
improve output qualityCO-STAR Prompt Framework
A practical and simplified six-element prompt framework: Context (background info), Objective (task definition), Style (writing style), Tone (attitude), Audience (intended reader), and Response (format and style) to craft effective prompts yielding concise and relevant LLM responses.
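The six elements can be assembled mechanically; the header labels below are one possible rendering of the framework, and the example content is invented for illustration:

```python
# Sketch: composing a prompt from the six CO-STAR elements.
def co_star_prompt(context, objective, style, tone, audience, response):
    return "\n".join([
        f"# CONTEXT\n{context}",
        f"# OBJECTIVE\n{objective}",
        f"# STYLE\n{style}",
        f"# TONE\n{tone}",
        f"# AUDIENCE\n{audience}",
        f"# RESPONSE\n{response}",
    ])

prompt = co_star_prompt(
    context="We sell a budgeting app.",
    objective="Write a launch announcement.",
    style="Concise marketing copy.",
    tone="Friendly and upbeat.",
    audience="Young professionals new to budgeting.",
    response="Three short paragraphs, plain text.",
)
```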
improve output qualityNo need to be polite with LLMs
When prompting large language models, using polite phrases like 'please' or 'thank you' does not affect the model's response. It is more efficient to get straight to the point without unnecessary politeness to save tokens and improve clarity.
save tokensInclude affirmations
Using affirmative words such as 'do' or negatives such as 'don't' can clearly guide the model towards desired or undesired behaviors, making outputs more aligned with the prompt's intentions.
improve output qualityPrompts to receive a clear/deep explanation on a topic
Framing prompts for explanations in simple terms or targeted to specific knowledge levels makes LLMs generate understandable and accessible explanations, for example, 'Explain like I’m 11 years old' or 'Explain as a beginner in X'.
improve output qualityTip the model
Statistical observations suggest that pretending to tip the model a monetary amount can motivate it to produce better-quality responses, with higher tip amounts tending to encourage better outputs. The approach is unconventional and not firmly established.
improve output qualityBe “strict”
Using authoritative phrases such as 'your task is' or 'you MUST' clarifies the model’s priorities and tasks, helping it understand instructions with higher importance and focus.
improve output quality“Threaten” the model
Analogous to tipping, warning the model of penalties for undesired outputs can steer it away from certain results or behaviors. Like tipping, this technique is unconventional and its effect is not firmly established.
improve output qualitySet the tone
Specifying the tone of voice or style, e.g., 'Answer in a natural, human-like manner', helps generate text that aligns with the intended mood or persona, enhancing realism and engagement.
improve output qualityLead the model
Encouraging the model to 'think step by step' prompts it to solve problems or explain concepts in a logical, sequential manner, improving reasoning and clarity especially for complex tasks.
improve reasoningAvoid biases
Explicitly instructing the model to avoid bias or stereotypes by including instructions like 'Ensure your answer is unbiased and doesn’t rely on stereotypes' reduces prejudiced or unfair outputs.
reduce hallucinationsLet the model ask you questions
Allowing the model to ask clarifying questions to gather more information before generating output promotes precision and relevance in responses by refining user intent iteratively.
improve output qualityLet the model test your understanding
Prompting the model to teach a subject including a test at the end (without answers) and then prompting it to check your answer engages interactive learning and verifies user comprehension.
improve reasoningRepeat a specific phrase multiple times
Repeating key phrases or themes in the prompt emphasizes focus points for the model, guiding attention and increasing the relevance of output towards those elements.
improve output qualityLet the model know you need a detailed response
Explicitly requesting detailed outputs, for example 'Write a detailed essay on X including all necessary information', guides the model to produce comprehensive content rather than brief summaries.
improve output qualityCorrect/change a specific part in the output
Instructing the model to revise specific portions of generated text, for example to improve grammar and vocabulary without changing style, refines output quality while preserving tone.
improve output qualityFor complex coding prompts that may be in different files
When generating code spanning multiple files, instruct the model to output a runnable script that creates or modifies these files, automating multi-file code generation and integration.
improve output qualityInclude specific words
Providing the model with starting words, lyrics, or sentences to continue ensures consistency in flow and style, useful in creative tasks like songwriting or storytelling.
improve output qualityPrompts for long essays
Prompting the model to mimic the style and language of example texts or essays helps generate coherent long-form content aligned stylistically with provided samples.
improve output qualityRole-Specific Prompts
Role-specific prompts involve framing the prompt to instruct the model to act as a particular expert or role, which enhances performance for specialized applications. For example, prompting the model with 'Act as a neuroscientist' can significantly improve results in domain-specific tasks. This subtle prompt structuring technique leverages the model's knowledge in specific contexts to improve accuracy and relevance.
improve output qualityPrecision Framing
Precision framing is a technique where prompts are specifically designed to elicit the most pertinent and accurate responses by providing wider context and narrowing the focus of the query. Research shows that precise prompts outperform vague ones in relevance by over fifty percent. This method improves output quality by ensuring the model understands exactly what is required to generate useful responses.
improve output qualityContextual Layering
Contextual layering involves integrating domain-specific knowledge or jargon into prompts to guide the model’s understanding for improved output. For example, including medical terms or specific conditions in medical applications has been shown to increase diagnostic accuracy by 28%. This technique tailors the prompt content to enhance relevance and accuracy in specialized domains.
improve output qualityBehavioral Conditioning
Behavioral conditioning uses multiple iteration cycles (5-7) to refine prompts progressively until high satisfaction rates are reached in enterprise applications. This approach demonstrates the effectiveness of iterative prompt engineering to produce reliable AI solutions, often leading to satisfaction levels above 90%.
improve output qualityDefine the audience
Specify the intended audience in your prompt to tailor responses appropriately. This helps the model adjust the complexity and style of the response to suit different knowledge levels. For example, 'Explain the concept of machine learning to an expert in data science' versus 'Explain the concept of machine learning to a high school student.'
improve output qualityRequest clarity
Frame questions to encourage simple and clear explanations, making complex topics easier to understand. Use phrases like 'Explain quantum computing in simple terms' or 'Explain the theory of relativity to me like I’m 11 years old.' This guides the model to tailor the response to the desired level of complexity.
improve output qualityClarify tasks
Emphasize task importance by using explicit phrases like 'Your task is' and 'You MUST' to direct the model's focus and compliance. For example: 'Your task is to analyze the economic impact of COVID-19. You MUST include data from the last three years.'
improve output qualityIntroduce consequences
Communicate the importance of following instructions by introducing consequences in the prompt, e.g., 'You will be penalized'. This can encourage adherence to format or content requirements. For example, 'If you do not follow the specified format, you will be penalized with a lower score.'
improve output qualityEmphasize human-like responses
Instruct the model to answer in a natural, human-like manner to improve the readability and engagement of outputs. Use phrases like 'Answer a question given in a natural, human-like manner.'
improve output qualityEncourage step-by-step thinking
Promote logical, sequential reasoning by encouraging the model to 'think step by step.' This can improve complex task performance and reasoning quality. For example: 'Explain how to bake a cake, thinking step by step.'
improve reasoningPromote unbiased responses
Ensure fairness and balance in outputs by explicitly instructing the model. Include prompts like 'Ensure that your answer is unbiased and does not rely on stereotypes.' to reduce bias in responses.
reduce hallucinationsFacilitate dialogue
Invite the model to engage interactively by asking clarifying questions before answering. For instance, prompt it to inquire for more details to produce a comprehensive response.
improve output qualityTeach with tests
Use prompts framing a teaching exercise with tests included to deepen understanding of concepts. Instruct not to give answers immediately, encouraging active learning.
improve output qualityRepeat key terms
Repeatedly include important words or phrases in the prompt to emphasize their significance and ensure they receive adequate focus in the response. For example, 'Discuss the importance of sustainability in sustainability practices.'
improve output qualityRequest detailed outputs
Clearly instruct the model to produce in-depth and comprehensive responses by specifying, for example, 'Write a detailed essay on climate change, including all necessary information.'
improve output qualityRevise without changing style
Ask the model to improve grammar and vocabulary in existing text without altering its original tone or style. For instance, 'Revise every paragraph by only enhancing grammar and vocabulary.'
improve output qualityManage complex code prompts
Provide explicit instructions for coding tasks that span multiple files, e.g., 'Whenever you generate code that spans more than one file, create a Python script that can generate or modify the necessary files.'
improve output qualityContinue texts with specific starters
Use a clear starter phrase to extend a given text in a coherent manner. For example, 'I’m providing you with the beginning of a story: ‘Once upon a time in a distant land…’ Finish it based on this.'
improve output qualityState requirements clearly
Explicitly state the requirements for the output, including keywords, regulations, hints, or instructions, to guide the model's response. Example: 'Your response must include three key points about renewable energy: benefits, challenges, and future potential.'
improve output qualityPrompt Compression
Prompt Compression addresses the challenge of reducing the resource consumption involved in using prompts, especially the computational and memory overhead introduced by large, detailed prompts. It involves compressing prompts either in continuous space or discrete space to maintain model performance while lowering the demand on computational resources. This technique is effective in scaling up prompt-based methods for practical applications by making prompts more compact and efficient.
save tokensAsk Open-Ended Questions
Using open-ended questions encourages detailed, expansive, and thoughtful responses from the LLM instead of simple yes/no answers. This technique unlocks the model’s ability to analyze and explore deeper insights into complex topics. Short closed questions limit the quality of output.
improve reasoningAsk for Examples
Requesting the LLM to provide examples improves clarity and helps illustrate complex concepts by making them easier to understand. Examples also make the responses more engaging and accessible. Avoid using ambiguous language, jargon without explanation, or assuming the model’s familiarity with references.
improve output qualityAvoid Ambiguity
Clear and unambiguous language is critical to prevent multiple interpretations by the LLM, ensuring outputs match the user’s intent. Avoid mix-ups of concepts, unclear pronouns, or jargon without explanation. Specifying exact subjects and objects in prompts improves accuracy and relevance of responses.
reduce hallucinationsTailor Prompts to Model's Capabilities
Understand the strengths and weaknesses of the specific LLM to craft prompts that leverage its unique capabilities, such as generating content, summarizing, or explaining. This alignment results in better quality and more relevant outputs. Avoid expecting real-time information from models not designed for it, as this can produce inaccurate results.
improve output qualityBe Concise and Comprehensive
Balance brevity and thoroughness in prompts to include key details without overwhelming the model. Focused and streamlined prompts help the LLM produce detailed but accurate responses, avoiding dilution of the main intent by excessive or scattered information.
improve output qualityDefine Clear Objectives and Desired Outputs
Before formulating prompts, it is crucial to define clear objectives and specify the desired outputs. By clearly articulating the task requirements, prompts can guide LLMs to generate responses that meet expectations. This principle emphasizes clarity in the prompt's goal to enhance model performance.
improve output qualityTailor Prompts to Specific Tasks and Domains
Different tasks and domains require tailored prompts to achieve optimal results. Customizing prompts to the specific task or domain provides LLMs with necessary context and improves their understanding. This technique enables more accurate and relevant responses by considering domain-specific nuances.
improve output qualityUtilize Contextual Information in Prompts
Incorporating relevant contextual information such as keywords, domain-specific terminology, or situational descriptions anchors the model's responses in the correct context. This enhances the quality and relevance of generated outputs by enabling the LLM to better understand the prompt environment.
improve output qualityIncorporate Domain-Specific Knowledge
Leveraging domain expertise by embedding relevant knowledge into prompts guides LLMs to generate responses aligned with specific domain requirements. This ensures that outputs are relevant and accurate within specialized contexts.
improve output qualityExperiment with Different Prompt Formats
Exploring variations in prompt structure, wording, and formatting helps identify the most effective approach for a given task. Experimentation can optimize LLM performance by discovering the format that elicits the best responses.
improve output qualityOptimize Prompt Length and Complexity
Striking a balance between providing sufficient information and avoiding overwhelming the model is critical. Optimizing the length and complexity of prompts improves the model’s understanding and generates more accurate responses.
improve output qualityConsider the Target Audience and User Experience
Tailoring prompts to the intended audience ensures relevant and meaningful responses. Considering user experience allows creation of intuitive, user-friendly prompts that align with user expectations.
improve output qualityLeverage Pretrained Models and Transfer Learning
Utilizing knowledge and capabilities of pre-trained models can enhance LLM performance with minimal additional training. Transfer learning allows applying learned features from one domain to another, improving prompt effectiveness.
improve output qualityFine-Tune Prompts for Improved Performance
Iteratively refining prompts based on model outputs and human feedback optimizes performance. This ongoing adjustment helps produce better results tailored to specific needs.
improve output qualityRegularly Evaluate and Refine Prompts
Prompt evaluation and refinement is a continuous process involving assessment and incorporation of user feedback. This maintains high-quality outputs and adapts prompts to changing requirements.
improve output qualityCollaborate and Share Insights with the Community
Collaboration enhances knowledge sharing and collective advancement. Practitioners exchange experiences, improving prompt engineering techniques across the field.
improve output qualityMonitor and Adapt to Model Updates and Changes
Prompt strategies should evolve with LLM updates to maintain effectiveness. Monitoring changes ensures prompts continue to perform optimally over time.
improve output qualityIncorporate User Feedback and Iterative Design
Use user feedback to iteratively improve prompts aligning with user preferences. This enhances relevance and user satisfaction with generated responses.
improve output qualityUnderstand the Limitations and Risks of Prompting
Recognize that poorly designed prompts can cause biases or inaccuracies. Conduct thorough evaluation and incorporate fairness and bias mitigation to ensure reliability of LLM outputs.
reduce hallucinationsStay Updated with Latest Research and Developments
Engage actively with current research, blog posts, and industry trends in prompt engineering to adopt cutting-edge techniques and best practices, ensuring state-of-the-art results.
improve output qualityFoster Collaboration between Researchers and Practitioners
Promote knowledge sharing between academia and industry to exchange real-world insights and research findings. This collaboration drives innovation and advances prompt engineering methodologies.
improve output qualityZero-shot chain-of-thought prompting
This technique triggers reasoning in LLMs without providing example completions by appending the phrase "let's think step by step" to the prompt. It involves a two-stage prompting process to elicit and then conclude reasoning. While it outperforms standard zero-shot prompts, it is generally less effective than few-shot CoT.
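The two-stage flow can be sketched as follows; `ask_model` is stubbed so the example runs offline, and the trigger phrases match the ones described above:

```python
# Sketch of the two-stage zero-shot CoT flow. `ask_model` stands in for
# any LLM call; here it is stubbed so the example runs offline.
REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer is"

def zero_shot_cot(question, ask_model):
    # Stage 1: elicit reasoning.
    stage1 = f"Q: {question}\nA: {REASONING_TRIGGER}"
    reasoning = ask_model(stage1)
    # Stage 2: extract the final answer from the elicited reasoning.
    stage2 = f"{stage1} {reasoning}\n{ANSWER_TRIGGER}"
    return ask_model(stage2)

# Stub model for demonstration only.
def fake_model(prompt):
    return "9." if ANSWER_TRIGGER in prompt else "3 + 6 = 9."

answer = zero_shot_cot("What is 3 + 6?", fake_model)
```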
improve reasoningEntropy-based prompt ordering
This technique addresses order sensitivity by using an entropy-based probing method to generate the optimal ordering of few-shot examples in the prompt. It aims to reduce variance and improve performance without the need for a development dataset. The method is shown to work across different model types and prompt templates.
improve output qualityCalibration technique to mitigate prompt biases
This technique counteracts biases in few-shot prompting such as majority label bias, recency bias, and common token bias by applying a calibration process to the language model. The approach reduces variance and can provide significant accuracy improvements in few-shot learning tasks.
reduce hallucinationsDeclarative and direct signifiers in prompts
This principle recommends using clear and explicit task descriptors like 'translate' or 'rephrase this paragraph' to clearly communicate the intended task to the model. Such explicitness helps LLMs better identify and execute the desired task effectively.
improve output qualityUse of task-specific few-shot demonstrations
When tasks require specific output formats, providing few-shot examples tailored to those formats helps the LLM interpret the task accurately. Models may interpret the few-shot examples holistically, so careful example selection is important.
improve output qualityCharacter or situational proxies for task specification
Use characters (e.g., Gandhi, Nietzsche) or characteristic situations within prompts as proxies for the task's intention. This leverages LLMs’ understanding of analogies to guide the generation in the desired style or perspective.
improve output qualityLexical and syntactic constraints in prompts
Constraining output by carefully crafting prompt syntax and lexical choices, such as specifying 'sentence' in translation tasks or using quotes, helps the model produce outputs within desired boundaries, enhancing precision and control.
improve output qualityStep-by-step reasoning encouragement
Encourage the model to decompose complex problems into subproblems via prompts that request step-by-step reasoning. This improves reasoning performance by guiding the model through a logical process.
improve reasoningGrammar and stylistic quality in inputs
Providing grammatically correct and stylistically consistent inputs helps maintain output quality since LLMs preserve stylistic features in their completions. This principle emphasizes careful input preparation.
improve output qualitySingle item generation repeated for list creation
Instead of asking the model to generate a list of N items in one go, generate a single item N times. This avoids the model getting stuck in repetitive loops and improves stability.
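One way to sketch the loop, with `generate` stubbed in place of a model call; feeding back the items generated so far (an assumption of this sketch) discourages duplicates:

```python
# Sketch: asking for one item at a time instead of a list of N.
# `generate` stands in for a model call and is stubbed here.
def generate_list(topic, n, generate):
    items = []
    for _ in range(n):
        seen = "; ".join(items) if items else "none yet"
        prompt = f"Give exactly one {topic} not already in this list: {seen}."
        items.append(generate(prompt))
    return items

# Stub "model" that yields a fixed pool of answers, for demonstration only.
pool = iter(["apple", "banana", "cherry"])
items = generate_list("fruit", 3, lambda prompt: next(pool))
```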
improve output qualityGenerate multiple completions and heuristic ranking
Generate many candidate outputs for a prompt and then rank them heuristically to select the best one. This technique improves overall output quality by leveraging diversity and heuristics.
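A sketch of the ranking step; the heuristic here (prefer answers ending in terminal punctuation, then longer ones) is purely illustrative, as real systems use task-specific scorers:

```python
# Sketch: sample several completions and keep the best by a simple heuristic.
# The heuristic is illustrative; real systems use task-specific scorers.
def rank_completions(completions):
    def score(text):
        # Strongly prefer completions that end in terminal punctuation.
        bonus = 1000 if text.rstrip().endswith((".", "!", "?")) else 0
        return bonus + len(text)
    return max(completions, key=score)

candidates = [
    "Paris",
    "The capital of France is Paris.",
    "The capital of France is",
]
best = rank_completions(candidates)
```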
improve output qualityPrompt reframing techniques
Reframe prompts by using low-level patterns from examples, explicitly itemizing instructions into bulleted lists, changing negative instructions into positive ones, breaking down tasks into sub-tasks, and avoiding generic statements. These changes make prompts easier for LLMs to understand and improve task-specific performance.
improve output qualityGradient-guided automated prompt generation
An automated technique that uses gradient-guided search to produce effective prompts as trigger tokens. Applied to masked language models, this method achieves strong performance and can outperform fine-tuned models in low-data settings.
improve output qualityMining and paraphrasing for optimal prompt generation
Automated approach using mining and paraphrasing techniques to generate optimized prompts for masked language models. This method boosts accuracy in relational knowledge extraction tasks by about 10%.
improve output qualityPrefix tuning
A method using learned continuous vectors ('prefixes') prepended to the input tokens of a generative model, while keeping other model parameters fixed. Prefix tuning can outperform fully fine-tuned large models with far fewer parameters tuned, achieving strong performance in full and low-data scenarios.
improve output qualityPromptSource - Prompt engineering IDE
An integrated development environment designed to systematize and crowdsource best practices for prompt engineering. It provides a templating language for defining data-linked prompts and tools for prompt management, facilitating prompt design and reuse.
improve output qualityPromptIDE - Visual prompt experimentation platform
A visual platform to experiment with prompt variations, track their performance, and iteratively optimize prompts. It provides an interactive interface suitable for refining prompt designs.
improve output qualityPromptChainer - Multi-step LLM applications design tool
A tool to design complex applications involving multiple LLM prompt executions chained together, including API calls and user inputs. It offers a Webflow-like interface for constructing sophisticated multi-step prompt workflows.
improve output qualityAdjust the style and tone
You can specify the desired style and tone in the prompt to influence the output's formality, technicality, or creativity. This helps tailor outputs for different audiences or purposes.
improve output qualityExperiment with different phrasings
Trying different wordings for the same input can improve outcomes, as changes affect how the model interprets the prompt and the information it generates.
improve output qualityCustomize LLMs (fine-tuning/customization)
Many LLMs allow customization through user-defined instructions or fine-tuning to tailor responses better to your specific tasks and needs, enhancing relevance and accuracy.
improve output qualityText-based Prompting Techniques
This category encompasses 58 prompting techniques that instruct Large Language Models using text input only. These techniques are designed to optimize the way text prompts are presented to the model to improve its task completion accuracy and reliability. The specifics of each technique in this category are compiled systematically in the referenced dataset and survey paper.
improve output qualityAnswer Engineering
Answer engineering is concerned with designing the output format of an LLM, involving three main decisions: the answer shape, the answer space, and the extractor method. For example, in a classification task, the answer space might be limited to specific tokens like 'positive' or 'negative', and the extractor maps the model's text output to these specific answers. This technique aims to improve the reliability and interpretability of generated answers by constraining output properly.
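The extractor step for the classification example above can be sketched like this; the label set and regex are illustrative assumptions:

```python
import re

# Sketch: constraining a classifier's answer space and extracting a label
# from free-form model text. Labels and regex are illustrative.
ANSWER_SPACE = {"positive", "negative"}

def extract_label(model_output: str) -> str:
    match = re.search(r"\b(positive|negative)\b", model_output.lower())
    if match and match.group(1) in ANSWER_SPACE:
        return match.group(1)
    return "unknown"

label = extract_label("I'd say the review is clearly Positive overall.")
```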
improve output qualityTree-of-Thought (ToT) Reasoning
Tree-of-Thought reasoning extends Chain-of-Thought methods by exploring multiple reasoning paths in a tree-like structure rather than a single linear chain. It involves branching, scoring, and pruning of paths, which helps in combinatorial and planning tasks. This structured exploration improves robustness by selecting the optimal reasoning path based on evaluation criteria.
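A minimal beam-search sketch of the branch/score/prune loop; `propose` and `score` are stubs standing in for model calls and evaluators:

```python
# Minimal ToT sketch: branch candidate thoughts, score partial paths, keep
# the best `beam`, repeat. `propose` and `score` are stubs for model calls.
def tree_of_thought(root, propose, score, depth=2, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [path + [t] for path in frontier for t in propose(path)]
        # Prune: keep only the `beam` highest-scoring partial paths.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

propose = lambda path: ["a", "b"]      # stub: two branches per node
score = lambda path: path.count("a")   # stub: prefer paths with more "a"
best = tree_of_thought(["start"], propose, score)
```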
improve reasoningNeuro-Symbolic Hybrid Models
These models integrate neural networks with symbolic AI systems, combining data-driven pattern recognition and rule-based logical reasoning. The hybrid approach enhances interpretability and robustness by leveraging symbolic logic for explicit inference while using neural modules for feature extraction from unstructured data.
improve reasoning and interpretabilityMemory-Augmented Neural Networks (MANNs)
MANNs enhance neural models with an explicit external memory component alongside a controller network. This architecture supports dynamic reading and writing to memory, facilitating reasoning consistency over long contexts, lifelong learning, and few-shot adaptation. Differentiable memory mechanisms allow gradient-based training to improve reasoning performance.
improve reasoning consistencyGraph Neural Networks (GNNs) and Knowledge Graphs
GNNs operate on graph-structured data representing entities and their relationships, enabling structured reasoning and multi-hop inference. When combined with knowledge graphs, GNNs facilitate logical reasoning by traversing and learning over graph topologies, improving explainability and inference capabilities in LLMs.
structured reasoning and explainabilityTool-Use and API Augmentations
LLMs can be augmented with external tools or APIs such as calculators, search engines, or databases to enhance reasoning capabilities. This enables programmatic reasoning and access to up-to-date, dynamic data, improving factual accuracy and computational power. However, reliance on external services introduces latency and complexity in integration and control.
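A sketch of the harness loop: the (stubbed) model emits a tool call, the harness executes it and feeds the result back. The `CALC:` convention is an assumption of this example, not a standard protocol:

```python
# Sketch of tool-augmented prompting. The CALC: convention is an assumption
# for this example; real systems use structured tool-call formats.
def run_with_tools(question, ask_model):
    reply = ask_model(question)
    if reply.startswith("CALC:"):
        expression = reply[len("CALC:"):].strip()
        # Demo only; eval on model output is unsafe in production.
        result = eval(expression, {"__builtins__": {}})
        reply = ask_model(f"{question}\nTool result: {result}")
    return reply

# Stub model for demonstration only.
def fake_model(prompt):
    if "Tool result:" in prompt:
        return "The answer is " + prompt.rsplit("Tool result: ", 1)[1]
    return "CALC: 17 * 23"

answer = run_with_tools("What is 17 * 23?", fake_model)
```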
improve reasoning with external toolsAutomated Verifiers and Critic Models
Automated verifiers use separate models or formal proof systems to critically assess and validate reasoning outputs of LLMs. This approach helps identify and filter out incorrect inferences and supports formal logic verification, enhancing reasoning reliability. However, integrating formal proof checking with natural language remains challenging.
improve reasoning accuracy and verificationEncoding
Encoding methods compress long textual prompts into compact vector representations that are more accessible to LLMs and reduce inference costs. Techniques include tuning language models to generate summary vectors or 'nuggets' that capture essential semantic and contextual information while reducing length and complexity for efficient prompting.
prompt compressionFiltering
Filtering involves evaluating information entropy of lexical units in a prompt and removing redundant or less useful content to shorten prompt length, improve efficiency, and retain key information for LLM comprehension. This text-to-text level compression is model-agnostic and helpful when computational resources are limited or for closed-source LLMs.
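A deliberately crude sketch of the idea: real methods score lexical units by information entropy under a small language model, while this stand-in simply drops a fixed stopword list:

```python
# Sketch: a crude token-level filter that drops high-frequency, low-information
# words to shorten a prompt. The stopword list is a stand-in for an
# entropy-based scorer.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "and", "that", "in"}

def compress_prompt(prompt: str) -> str:
    kept = [w for w in prompt.split() if w.lower() not in STOPWORDS]
    return " ".join(kept)

long_prompt = "Summarize the history of the internet in a paragraph"
short = compress_prompt(long_prompt)
```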
prompt compressionReal-gradient tuning
Real-gradient tuning applies gradient-based optimization methods to prompt tuning by mapping discrete prompts into continuous embedding space, enabling the use of gradient descent. Techniques like AutoPrompt iteratively optimize trigger tokens via gradients to generate effective discrete prompts for downstream tasks in open-source models.
automatic prompt optimizationImitated-gradient prompting
Imitated-gradient prompting uses LLMs to simulate or imitate gradient-based optimization for prompt design when true gradients are unavailable, such as in closed-source black-box models. Methods generate candidate prompts, score them, and iteratively improve prompts by symbolic editing operations or natural language-based gradient instructions.
automatic prompt optimizationEvolution-based methods
Evolution-based prompting treats prompt design as a discrete optimization problem solved via evolutionary algorithms such as genetic algorithms or differential evolution. LLMs act as operators to crossover, mutate, and select prompts to iteratively improve task performance, enabling black-box optimization without explicit gradients.
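The evolutionary loop can be sketched generically. In a real system `mutate` would ask an LLM to rephrase or crossover prompts and `fitness` would run each prompt against a labeled dev set; the toy stand-ins below only reward prompts that pick up two target phrases:

```python
import random

def evolve_prompts(seeds, mutate, fitness, generations=5, pop=6):
    """Generic evolutionary loop over prompts: score the population,
    keep the best half as survivors, refill with mutated survivors."""
    population = list(seeds)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: max(2, pop // 2)]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return max(population, key=fitness)

random.seed(0)
PHRASES = [" Think step by step.", " Answer with a single number."]
# Toy mutation: append one phrase the prompt is still missing.
mutate = lambda p: p + random.choice([s for s in PHRASES if s not in p] or [""])
fitness = lambda p: sum(s in p for s in PHRASES)
best = evolve_prompts(["Solve the problem."], mutate, fitness)
```

No gradients are required anywhere — only the ability to score a prompt — which is what makes this family of methods applicable to closed black-box models.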
automatic prompt optimizationFaithful Chain-of-Thought (CoT) Reasoning
Faithful Chain-of-Thought Reasoning ensures that the reasoning chain generated by the LLM truly reflects the path to the answer. This increases trust and interpretability by guaranteeing that the final answers derive directly from the explicit reasoning steps, enhancing transparency and reliability in model outputs.
reduce hallucinationsPrompt Engineering with Taxonomy-Based Survey
This technique involves categorizing and analyzing various prompting strategies based on a comprehensive taxonomy to identify their strengths and weaknesses. It provides systematic insights into prompt design and aids in choosing appropriate tactics for different tasks. It is more of a survey-based approach than a direct prompting method, serving as a guide for effective prompt construction.
improve output qualityLearning from Contrastive Prompts (LCP)
LCP generates multiple prompt candidates and evaluates outputs to identify successful and unsuccessful prompts. It contrasts good and bad examples to understand what works and refines prompts iteratively. This method reduces overfitting by summarizing failure reasons and exploring diverse prompts, leading to better optimization and adaptable prompts.
improve output qualityPrompt Agent
Prompt Agent models prompt generation as a planning problem focusing on integrating subject matter expertise. It starts with an initial prompt, evaluates outputs, refines prompts with expert-level feedback, and expands the prompt space in a tree structure to prioritize high-reward paths. This approach uses self-reflection and error analysis to achieve accurate and adaptive prompt engineering.
improve output qualityTEXTGRAD (Textual Gradient-Based Optimization)
TEXTGRAD iteratively refines prompts using natural language feedback termed 'textual gradients.' One LLM or a human evaluator reviews generated outputs and provides nuanced textual feedback, which another LLM uses to generate improved prompt versions. This process continues until desired output quality is met, allowing flexible and detailed prompt optimization particularly suited for creative and detailed tasks.
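The refinement loop can be sketched with stub callbacks; `generate`, `critique`, and `revise` below stand in for LLM (or human-evaluator) calls and are purely illustrative:

```python
def textgrad_optimize(prompt, generate, critique, revise, steps=4):
    """TEXTGRAD-style loop: run the prompt, ask a critic for natural-
    language feedback (the 'textual gradient'), and let an editor model
    rewrite the prompt until the critic is satisfied."""
    for _ in range(steps):
        output = generate(prompt)
        feedback = critique(output)
        if feedback is None:          # critic has no complaints
            break
        prompt = revise(prompt, feedback)
    return prompt

# Toy stand-ins: the critic wants answers kept brief.
generate = lambda p: p                      # echo stub for an LLM call
critique = lambda out: None if "concise" in out else "Ask for a concise answer."
revise = lambda p, fb: p + " Be concise."
tuned = textgrad_optimize("Summarize the article.", generate, critique, revise)
```

The key property is that feedback is free-form text rather than a numeric loss, so the critic can express nuanced objections ("the tone is too formal", "the summary misses the second argument") that a scalar score cannot.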
improve output qualityVote-K Prompting
A few-shot prompting method that selects exemplars that are both diverse and representative, balancing variety and relevance for improved performance.
improve output qualityTabular Chain-of-Thought (Tab-CoT)
Outputs reasoning steps as a markdown table in zero-shot CoT prompting, improving output structure and reasoning performance.
improve reasoningFew-Shot Chain-of-Thought (Few-Shot CoT)
Presents the model with multiple exemplars including chains-of-thought to significantly enhance reasoning and performance.
improve reasoningTree-of-Thought (ToT)
Creates a tree search of possible reasoning steps (thoughts), evaluates progress per step and chooses promising paths, enabling efficient search and improved problem-solving in complex tasks.
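The search skeleton is a beam search over partial "thoughts". In a real ToT system `expand` proposes next thoughts with an LLM and `score` is an LLM self-evaluation; the demo below uses toy functions to grow a digit string whose digit sum lands near 15:

```python
def tree_of_thought(expand, score, root, depth=3, beam=2):
    """Beam search over partial thoughts: expand every frontier state,
    score the candidates, keep only the `beam` most promising."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in expand(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

# Toy problem: build a 3-digit string whose digit sum is close to 15.
expand = lambda s: [s + d for d in "123456789"]
score = lambda s: -abs(15 - sum(int(c) for c in s))
best = tree_of_thought(expand, score, "")
```

Pruning to a small beam is what keeps the tree tractable: instead of enumerating every reasoning path, only the few most promising partial chains are extended at each depth.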
improve reasoningMixture of Reasoning Experts (MoRE)
Uses multiple specialized prompts for different reasoning types, selects best answer by agreement score to improve multi-faceted reasoning task performance.
improve output qualityMax Mutual Information Method
Generates multiple varied prompt templates and selects the one maximizing mutual information between prompts and LLM outputs for optimal prompting.
improve output qualityMeta-Reasoning over Multiple CoTs
Generates multiple reasoning chains combined in a prompt to produce a final answer by prompting the LLM, enhancing reasoning quality.
improve reasoningDiVeRSe
Generates multiple prompts, performs Self-Consistency in each, scores reasoning paths stepwise, then selects final response, balancing diversity and quality.
improve reasoningUniversal Self-Adaptive Prompting (USP)
A generalization of COSP that uses unlabeled data with complex scoring to select exemplars without Self-Consistency, aiming for broad applicability.
improve reasoningPrompt Paraphrasing
Generates new prompt variants that maintain original meaning to augment data for ensemble methods, improving robustness and performance.
improve output qualitySeparate LLM Answer Extractor
When answer extraction is complex, a separate LLM is used to interpret model output and extract the final answer reliably.
improve output qualityTree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT) Prompting
These techniques extend Chain-of-Thought prompting by expanding reasoning beyond linear sequences to tree or graph structures, allowing exploration of multiple logical paths. ToT enables decision-tree-like brainstorming, while GoT models thoughts as graphs for dynamic reasoning. These methods enhance creativity and decision-making capacity for complex problems.
improve reasoningSocratic Prompting
Socratic Prompting uses a series of questions designed to lead the model (or user) to a conclusion through inquiry, emulating the Socratic method. This technique helps explore the depth of knowledge and reasoning abilities by structured questioning.
improve reasoningPrompt Macros and End-goal Planning
This technique uses pre-defined prompt templates that combine multiple micro-queries into a single macro prompt to guide the model towards overarching goals. It balances the breadth and specificity of requests to produce coherent, multi-faceted outputs in one interaction.
improve output qualityGrammar Correction
Grammar correction prompting instructs the model to detect and fix grammatical errors or inconsistencies in a user's text, effectively acting as a conversational grammar checker. It improves writing quality and clarity according to desired style and context.
improve output qualityReasoning WithOut Observation (ReWOO)
ReWOO detaches reasoning processes from external observations to reduce token consumption and improve efficiency. It divides workflow into Planner, Worker, and Solver modules, allowing modular retrieval and synthesis of knowledge. ReWOO outperforms baseline methods on several NLP benchmarks, improves token efficiency and accuracy, and shows robustness under tool-failure scenarios.
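The three-module pipeline can be sketched as follows. The `#E1`-style evidence variables loosely mirror the paper's setup; `planner`, `solver`, and the stub tools are illustrative assumptions:

```python
def rewoo(question, planner, tools, solver):
    """ReWOO sketch: the Planner emits the whole plan up front (no
    observation-dependent re-planning), Workers run each tool call,
    and the Solver synthesizes the collected evidence."""
    evidence = {}
    for eid, tool, arg in planner(question):
        for key, val in evidence.items():   # substitute earlier evidence
            arg = arg.replace(key, val)
        evidence[eid] = tools[tool](arg)
    return solver(question, evidence)

# Toy demo with stubbed tools and a fixed two-step plan.
tools = {
    "search": lambda q: "Paris" if "France" in q else "unknown",
    "upper": lambda s: s.upper(),
}
planner = lambda q: [("#E1", "search", "capital of France"),
                     ("#E2", "upper", "#E1")]
solver = lambda q, ev: ev["#E2"]
answer = rewoo("What is the capital of France, uppercased?",
               planner, tools, solver)
```

Because the plan is fixed before any tool runs, the LLM is called only twice (plan, then solve) regardless of how many tool steps execute — which is where the token savings over interleaved reasoning come from.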
reduce hallucinationsReason and Act (ReAct)
ReAct combines verbal reasoning and acting by prompting LLMs to generate reasoning traces alongside action outputs, enabling dynamic reasoning, planning, and interaction with external environments. It improves performance and robustness in multi-task evaluations including question answering, fact verification, text games, and web navigation.
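The Thought/Action/Observation loop can be sketched with a scripted stand-in policy (a real system would prompt the LLM to continue the transcript at each step; the `lookup` tool and its answer are fabricated for the demo):

```python
def react_loop(policy, tools, max_steps=5):
    """ReAct loop: alternate Thought -> Action -> Observation, feeding
    each observation back into the transcript until the policy emits
    a Finish action."""
    transcript = []
    for _ in range(max_steps):
        thought, action, arg = policy(transcript)
        transcript.append(f"Thought: {thought}")
        if action == "Finish":
            return arg
        observation = tools[action](arg)
        transcript.append(f"Action: {action}[{arg}]")
        transcript.append(f"Observation: {observation}")
    return None

# Scripted policy: look the fact up, then finish with the observation.
def policy(transcript):
    if not transcript:
        return ("I should look this up.", "lookup", "boiling point of water")
    latest = transcript[-1].removeprefix("Observation: ")
    return ("The observation answers the question.", "Finish", latest)

tools = {"lookup": lambda q: "100 degrees Celsius"}
answer = react_loop(policy, tools)
```

Unlike ReWOO above, each action here is chosen after seeing the previous observation, which is what lets ReAct recover mid-trajectory when a tool returns something unexpected.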
improve reasoningAutomatic Multi-step Reasoning and Tool-use (ART)
ART is a framework using frozen LLMs to automatically generate intermediate reasoning steps and use external tools for complex tasks. It selects demonstrations of multistep reasoning and tool usage from libraries to decompose tasks and integrate tool outputs, leading to significant improvements on natural language inference, question answering, and code generation tasks, outperforming prior few-shot prompting methods.
improve reasoningPrompt Engineering with DSPy
DSPy is a framework that automates and standardizes prompt creation through declarative definitions, rules, and templates. It simplifies prompt engineering in complex workflows by generating optimized prompts automatically, improving consistency and scalability across teams.
improve output qualityLogical Chain-of-Thought (LogiCoT) Prompting
Logical Chain-of-Thought (LogiCoT) prompting integrates symbolic logic principles into the reasoning process, using a think-verify-revise loop to verify each reasoning step with reductio ad absurdum and provide targeted feedback. This neurosymbolic framework reduces logical errors and hallucinations, improving coherence and accuracy on complex multi-step problems.
improve reasoningSystem 2 Attention (S2A) Prompting
System 2 Attention (S2A) prompting enhances attention mechanisms in transformers by having the model regenerate input contexts to selectively attend to relevant parts, improving factual accuracy and objectivity in generated responses. This two-step process improves response quality across diverse tasks like factual QA and long-form generation.
improve output quality; reduce hallucinationsScratchpad Prompting
Scratchpad Prompting allows the model to generate a sequence of intermediate tokens as a 'scratchpad' before producing the final output, facilitating multi-step algorithmic reasoning without modifying model architecture. This method boosts success rates on programming tasks and complex calculations but is limited by fixed context size and reliance on supervised training for scratchpad utilization.
code generation and executionStyle unbundling
Style unbundling breaks down the key elements of an expert's style or skill into discrete components, rather than relying on simple imitation. The AI is prompted to analyze and list the specific characteristics that define a person's approach, which can then be used to guide new content generation. This enables nuanced application of style with greater control over which aspects are emphasized.
improve output qualitySynthetic bootstrap
Synthetic bootstrap uses AI to generate multiple examples based on given inputs, which can serve as in-context learning data for subsequent prompts or as test inputs. This technique is valuable when real examples are scarce, enabling quick creation of diverse and realistic input sets to improve AI model performance without expert data.
improve output qualitySelf-consistency in Chain of Thought
Self-consistency improves chain of thought reasoning by sampling multiple reasoning paths and aggregating the final answers to increase answer robustness and quality. This technique offers an easy performance boost and enhances robustness but at the cost of increased computational resources and latency. It helps reduce errors in chain of thought outputs by encouraging consistent answers across multiple samples.
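The aggregation step is just a majority vote over sampled answers. The sketch below stubs the sampled LLM runs with a fixed list; in practice `sample` would re-issue the same CoT prompt at temperature > 0 and parse out the final answer:

```python
from collections import Counter

def self_consistent_answer(sample, n=7):
    """Sample n independent chain-of-thought runs and majority-vote
    on their final answers."""
    return Counter(sample() for _ in range(n)).most_common(1)[0][0]

# Stand-in for sampled runs: two of seven reasoning paths went wrong.
runs = iter(["42", "41", "42", "42", "42", "41", "42"])
answer = self_consistent_answer(lambda: next(runs))
```

The vote is over final answers only, not reasoning chains, so different valid derivations that reach the same result reinforce each other.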
improve reasoningDARE Prompt
DARE (Determine Appropriate Response) prompting adds a mission and vision statement context before the user prompt to reduce hallucinations and improve alignment with the intended behavior. This technique involves explicitly instructing the language model about its role and mission to encourage responses that comply with the specified objectives and to refuse inappropriate questions.
reduce hallucinationsPrompt Temperature Modulation
Modulating the temperature parameter controls randomness in LLM outputs by adjusting the sampling distribution of tokens. Lower temperatures make output more deterministic and focused on the most likely tokens, suitable for factual or deterministic tasks. Higher temperatures increase randomness and creativity, useful for creative or open-ended tasks. Proper tuning of temperature enhances the relevance and appropriateness of generated responses.
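Mechanically, temperature divides the logits before the softmax. A minimal sketch:

```python
import math, random

def sample_token(logits, temperature=1.0):
    """Temperature-scaled sampling: divide logits by T before softmax.
    T near 0 approaches greedy argmax; larger T flattens the
    distribution and increases diversity."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

random.seed(0)
# At very low temperature the highest-logit token wins essentially always.
greedy = [sample_token([2.0, 1.0, 0.0], temperature=0.05) for _ in range(20)]
```

At T = 0.05 the logit gap of 1.0 becomes a gap of 20 after scaling, so the runner-up token's probability collapses to roughly e⁻²⁰; at T = 2.0 the same gap shrinks to 0.5 and all three tokens stay plausible.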
improve output qualityNatural Language for Reasoning in Chain-of-Thought Prompting
This technique encourages writing the reasoning process in natural, conversational language, mimicking how one explains a problem to another person. Avoiding overly concise or formulaic prompts helps LLMs generate rich and understandable reasoning steps. Writing in natural language format improves clarity and effectiveness of the reasoning process in the model’s output.
improve reasoningPrompt Ordering for Defense
Ordering the prompt text to give reasoning first and answer last can protect against prompt injection attacks or unwanted responses. Changing the order in which instructions and user input appear can help ensure that critical instructions are obeyed before processing user queries. This technique is helpful for security and reducing unintended behaviors by LLMs.
reduce hallucinationsDetailed Descriptions for Tables in Prompts
Including detailed descriptions for each intent, class, or table when working with tabular data in prompts enhances LLM accuracy. Providing multi-line descriptors with rich explanations rather than brief labels guides the model to understand the table structure and intent better. This approach improves classification, entity extraction, and structured data tasks.
improve output qualityStructured Text Instead of Wall of Text
Providing structured text (e.g., JSON with rules or lists) rather than long unstructured text in prompts leads to better quality and consistency in LLM outputs. Structuring instructions and inputs helps the model parse and apply rules more effectively, producing clearer and more reliable responses. It is particularly useful when generating output like SQL or rule-based responses.
improve output qualityInclude Complete Prompts in Fine-Tuning
When fine-tuning LLMs, include the full prompt context and input text in the training data to help the model learn how to handle the prompt structure during inference. This prevents the model from failing to interpret partial inputs and improves its ability to generate correct responses given similar prompt formats at deployment.
improve output qualityResponsible AI and Safety Filters
Incorporate safety filters and responsible AI principles when generating content with LLMs. For example, using predefined harm categories and block thresholds to prevent abusive, hateful, sexually explicit or dangerous content. Applying these filters protects users and ensures compliance with ethical guidelines while interacting with generated content.
reduce hallucinationsOutput Automator Pattern
This method instructs the LLM to produce executable artifacts, such as scripts or data formats, that automate a sequence of steps in a specific task. It combines natural language instructions with automation outputs, facilitating tasks like generating shopping lists from meal plans or calendar events from schedules. This approach enhances efficiency and integration with external tools.
improve output qualityVisualization Generator Pattern
This pattern prompts the LLM to generate code or data inputs suitable for visualization tools based on previously discussed data. This enables easy creation of plots, charts, or graphs to visually represent information such as statistical data or mathematical functions. It extends the model's capability to communicate data insights interactively.
improve output qualityRecipe Pattern
This pattern prompts the model to provide a complete and sequenced set of steps to achieve a goal, including filling in missing steps and optionally identifying unnecessary ones. It can be used for generating detailed instructions, itineraries, or workflows, ensuring comprehensive guidance for tasks.
improve output qualityTemplate Pattern
The Template pattern involves providing the LLM with a fixed format or structure and asking the model to fit outputs within specified placeholders. This preserves formatting and aids in generating consistently structured responses, facilitating easier post-processing or display.
improve output qualityTail Generation Pattern
This technique directs the model to append specific content at the end of the output such as disclaimers, summaries, or questions to prompt user engagement. It is useful for concluding responses with important notes or calls to action.
improve output qualityMeta Language Creation Pattern
This technique defines or remaps language tokens within the prompt to shorthand or code for longer instructions. It allows users to create custom vocabulary or commands to simplify interactions and standardize requests to the model.
improve output qualityMenu Actions Pattern
This pattern configures the LLM to recognize specific typed commands and perform associated actions, optionally handling multiple commands and interacting continuously by asking for the next user action. It simulates menu-driven input and programmatic command execution.
improve output qualityAlternative Approaches Pattern
This pattern has the model list alternative ways to accomplish a provided task, optionally comparing pros and cons, including the original approach, and prompting the user to choose among them. It aids in exploring different solutions and making informed choices.
improve reasoningAsk for Input Pattern
This interaction pattern instructs the LLM to prompt the user for specific inputs, often to guide the subsequent generation or to choose among options. It facilitates dynamic user-model interaction capturing preferences or additional details.
improve output qualityFlipped Interaction Pattern
In this prompting strategy, the model asks the user questions repeatedly to achieve a specified goal or condition, optionally in batches. It simulates a dialogic approach to information gathering or learning, improving engagement and model understanding.
improve output qualityInfinite Generation Pattern
This technique prompts the model to generate output continuously, producing a defined number of outputs at a time until stopped. It supports tasks like idea generation, storytelling, or recipe creation without a fixed endpoint.
improve output qualityContext Manager Pattern
This context control technique involves instructing the LLM to only consider specified scopes, include or exclude particular information, and optionally reset context. It helps manage focus and relevance of model responses.
improve output qualitySemantic Filter Pattern
This pattern prompts the model to filter information to remove or include specific content, such as excluding mentions of particular topics or sensitive information. It is useful for content moderation or focusing outputs.
reduce hallucinationsFact Check List Pattern
This error identification pattern instructs the LLM to generate a list of fundamental facts contained in the output that could undermine its veracity if incorrect. The fact list is inserted at a specified position for aiding validation and verification.
reduce hallucinationsReflection Pattern
Upon generating an answer, the model explains its reasoning and assumptions, optionally allowing the user to improve the question. This pattern increases transparency and trustworthiness of model outputs.
reduce hallucinationsTransformers
Transformers are a prominent architecture for Large Language Models (LLMs) such as GPT and BERT. They use self-attention mechanisms to process input data in parallel, capturing context effectively to generate coherent and contextually relevant text.
improve output qualityRecurrent Neural Networks (RNNs)
RNNs are older neural network architectures designed to remember past inputs in their internal state, which allows them to process sequences of data. LSTM (Long Short-Term Memory) is a popular type of RNN that addresses the vanishing gradient problem, making them capable of learning long-term dependencies.
improve reasoningChain of Density (CoD)
Chain of Density is a method to generate short but dense summaries where every word adds significant value. It employs a series of iterative summaries starting from a prompt where the AI is instructed to incrementally improve the density of a summary by incorporating novel and relevant entities at each step. While called a chain, it processes outputs sequentially with a single initial prompt, enhancing summarization quality for dense information consolidation.
improve output qualityVerbalized Confidence
Verbalized confidence is a validation technique where the LLM is asked to state how confident it is in its answer. This response is used as a metric for output quality. While simple, it suffers from bias, as LLMs tend to overstate their confidence. Calibration with more advanced prompting techniques can improve reliability, but it remains an imperfect validation metric.
validationUncertainty-Based Validation
This technique derives a measure of uncertainty by analyzing the diversity (non-unique answers) among multiple LLM outputs to the same prompt. A high number of differing answers indicates greater uncertainty or disagreement within the model about the correct response. This provides a complementary perspective to majority vote approaches and can be combined with self-consistency methods for enhanced validation.
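One simple way to turn answer diversity into a number, as a sketch (the normalization choice is an assumption; other dissimilarity measures work too):

```python
def answer_uncertainty(answers):
    """Diversity-based uncertainty: fraction of distinct answers among
    n samples. 0.0 means every sample agreed; values near 1.0 mean the
    model largely disagrees with itself and the output needs scrutiny."""
    if len(answers) < 2:
        return 0.0
    return (len(set(answers)) - 1) / (len(answers) - 1)
```

A usage rule of thumb: run the same prompt several times, and if `answer_uncertainty` exceeds some threshold, escalate — re-prompt, retrieve evidence, or route to a human — instead of trusting the majority answer blindly.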
validationChain of Verification (CoVe)
Chain of Verification employs LLMs to plan and verify information validity before final answering. The model generates verification questions to confirm the truthfulness of generated content, mimicking human fact-checking processes. This technique is especially critical in high-stakes contexts and can be integrated with Retrieval Augmented Generation (RAG) systems to verify real-time information against multiple sources, enhancing output reliability.
validationReWOO (Reasoning Without Observation)
ReWOO enhances efficiency by decoupling reasoning from external observations and partitioning the process into Planner, Worker, and Solver modules. The Planner designs a set of plans that the Worker executes sequentially, and the Solver analyzes the results. It reduces autonomous adjustments during execution compared to ReAct, lowering token consumption by about 64% and increasing accuracy, and is more robust against tool failures. It also supports heterogeneous LLMs for different modules based on complexity.
improve efficiencyPrompting Technique Categorization
The paper provides a comprehensive categorization of prompting techniques for Large Language Models. It offers a standardized, interdisciplinary framework dividing techniques into seven distinct categories, aiming to clarify their unique contributions and applications. This classification helps practitioners effectively design prompts tailored to their domains.
improve output qualityPrompt Transferability
Prompt transferability refers to the ability to reuse prompts developed for one task or model on other tasks or models, improving efficiency by reducing prompt re-design efforts. It explores how well prompts generalize across different contexts or domains.
improve output qualityCross-Modality Prompt Transfer
A framework where prompts trained on data-rich source modalities are transferred to target tasks in different, data-scarce modalities. This enables efficient adaptation and broadens applicability of prompt tuning beyond single modalities.
improve output qualityEdgePrompt: Engineering Guardrail Techniques for Offline LLMs
Techniques designed to implement guardrails in offline LLM deployments, especially in sensitive applications like K-12 education, to ensure safer and controlled model behavior without real-time monitoring.
reduce hallucinationsConcept for Integrating an LLM-Based Natural Language Interface for Visualizations Grammars
A technique aimed at combining LLMs with visualization grammars via natural language interfaces to generate visualizations from textual prompts effectively.
improve output qualityLLM Shots: Best Fired at System or User Prompts?
A technique investigating whether it is more effective to include few-shot examples in system prompts or user prompts to improve LLM performance on specific tasks.
improve output qualityLeveraging Prompt Engineering with Lightweight Large Language Model for Clinical Information Extraction
Combining prompt engineering with lightweight LLMs to label and extract clinical information accurately from medical reports such as radiology reports, enabling domain-specific applications.
improve output qualityFrom Tables to Triples: A Prompt Engineering Approach
A technique focusing on transforming tabular data into triples (structured knowledge graph form) using specifically engineered prompts to guide the LLMs.
improve output qualityTokenization
Tokenization involves dividing input text into smaller units called tokens, which the model processes to understand the input. Knowing the token limit helps in crafting prompts that fit within the model's processing capabilities, ensuring efficient and effective interaction.
improve output qualityGuided Responses
After tokenization, the model predicts the next token based on previously seen tokens, using learned patterns from training data. This probabilistic prediction is influenced by logits, which are raw scores transformed into probabilities via the softmax function, guiding the model's response generation.
improve output qualityTuning the Output with Parameters
This technique involves adjusting parameters to control the model output. It includes setting token limits to manage length, using stop words to end generation, tuning temperature to balance predictability and creativity, and applying top-k and top-p sampling for randomness control. Beam search width influences the number of candidate outputs considered to optimize quality.
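Of these, top-p (nucleus) sampling is the least obvious; a minimal sketch of the filtering step, operating on an already-normalized probability list:

```python
def top_p_filter(probs, p=0.9):
    """Nucleus (top-p) filtering: keep the smallest set of tokens whose
    cumulative probability reaches p, mask the rest, and renormalize.
    Top-k is the same idea with a fixed count instead of a mass
    threshold."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

nucleus = top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.75)
```

Unlike top-k, the number of surviving tokens adapts to the distribution: a confident next-token prediction may keep only one candidate, while a flat distribution keeps many.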
improve output qualityConstructing the Prompt
Crafting prompts clearly and explicitly to guide the model effectively. This includes being direct, providing context or background, specifying the desired format (e.g., bullet points), and indicating language style or tone, which helps in obtaining accurate and structured responses.
improve output qualityIterative Approach
Prompt engineering as a continuous refinement process where initial prompts are adjusted based on the quality of responses. This can include using generated knowledge as context in iterative passes to enhance relevance and accuracy of outputs.
improve output qualityOptimizing Inference Time Techniques
A collection of methods to reduce computational resource needs and improve the speed of model inference, including model pruning to remove non-essential parameters, quantization to use lower precision formats, model distillation to create smaller efficient models, optimized hardware deployment, and batch inference to improve throughput.
reduce resource usageIterative Refinement for LLM Optimization
An iterative strategy to improve the LLM's performance by evaluating initial outputs for relevance and accuracy, gathering user feedback, refining prompts, and tuning parameters such as temperature and beam width. This continuous loop ensures enhanced quality and alignment with user needs.
improve output qualityLower Precision
Operating models at reduced numerical precision levels like 8-bit or 4-bit to gain computational efficiency without substantially sacrificing model performance.
reduce resource usageModel Distillation
Training a smaller student model to mimic a larger teacher model's behavior, resulting in a more computationally efficient model that maintains comparable performance levels.
reduce resource usageDynamic Quantization
Post-training conversion of neural network weights to lower precision formats like INT8 to reduce memory use and computational cost during inference, enhancing speed and hardware efficiency.
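The core arithmetic is simple; a symmetric per-tensor sketch (real frameworks also handle per-channel scales, zero points, and on-the-fly activation quantization, which this omits):

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization sketch: one float scale
    maps weights into [-127, 127], as dynamic quantization applies to
    linear-layer weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0   # guard all-zero case
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the INT8 codes."""
    return [v * scale for v in q]

q, scale = quantize_int8([0.5, -1.0, 0.25])
restored = dequantize(q, scale)
```

The memory win is 4x versus float32; the cost is a small, bounded rounding error per weight (at most half a quantization step), which is why accuracy typically degrades only slightly.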
reduce resource usagePruning
Reducing model size by removing less important parameters, which decreases computational requirements and increases efficiency without significantly harming performance.
reduce resource usageBrowbeating Prompts
This technique involves crafting prompts that intentionally 'browbeat' or pressure the AI, pushing it to produce more assertive or extreme responses. The goal is to force the AI out of a neutral stance into a more definitive or bold stance, though it must be used cautiously to avoid generating undesirable content.
improve output qualityCatalogs Or Frameworks For Prompting
Utilizing prompt catalogs or frameworks involves categorizing and systematizing different prompting patterns or styles. This technique helps users select appropriate prompt templates based on their goal, to standardize and optimize prompt creation.
save tokens, improve output qualityCertainty And Uncertainty Prompting
This technique involves explicitly requesting the AI to express a level of certainty or doubt in its responses. It can be used to gauge the confidence of the AI in its answers and to manage the variability or reliability of responses.
improve reasoning, add trustworthinessChain-of-Density (CoD) Prompting
A technique to enhance summaries or content condensation by instructing AI to be dense and comprehensive in its coverage, often via sequential prompts or conditioning. It aims to produce highly informative and packed summaries.
improve output qualityChain-of-Feedback (CoF) Prompting
An extension of Chain-of-Thought prompting, where the AI is asked to generate a series of sub-questions and their answers, which can help reduce hallucinations and increase answer validity.
reduce hallucinationsGenerating Prompts Via Generative AI
Using AI itself to craft effective prompts for future queries, leveraging its language understanding to optimize prompt design.
generate better promptsImporting Text As Prompting Skill
Involves carefully framing prompts to import and utilize external text effectively within the AI response, often transforming static text into an active prompt component.
import data for analysisMacros In Prompts
Embedding reusable macro patterns within prompts to streamline and standardize prompt creation. This is similar to macros in spreadsheets, where a macro expands into a predefined text or command sequence.
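A minimal sketch of macro expansion; the `{{name}}` token syntax and the macro table contents are illustrative assumptions:

```python
# Hypothetical macro table: shorthand tokens expand to full instructions.
MACROS = {
    "{{cot}}": "Think step by step and show your reasoning before answering.",
    "{{json}}": "Respond only with valid JSON and no surrounding prose.",
}

def expand_macros(prompt, macros=MACROS):
    """Expand shorthand macros so common boilerplate is written once
    and reused across prompts, much like spreadsheet macros."""
    for name, text in macros.items():
        prompt = prompt.replace(name, text)
    return prompt

expanded = expand_macros("Extract the dates. {{json}} {{cot}}")
```

Beyond saving keystrokes, a shared macro table standardizes phrasing across a team, so a proven instruction wording is changed in one place rather than hunted down across dozens of prompts.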
save tokens, improve efficiencySinister Prompting
Crafting prompts that encourage the AI to produce malicious or harmful content, whether with malicious intent or to probe AI boundaries. Its use is discouraged and potentially illegal.
prohibited and unethicalStar Trek Trekkie Lingo Prompting
Using Star Trek terminology and lingo in prompts to influence the AI to generate results with a flavor or style reminiscent of Star Trek, which sometimes improves output quality.
style inspiration and engagementStep-Around Prompting Technique
Crafting prompts that circumvent filters or restrictions by phrasing questions or commands so they bypass safeguards while still achieving the goal.
bypass restrictionsTrust Layers For Prompting
External components or systems that validate, verify, or provide trust signals to the AI prompts or responses, to increase reliability and safety.
increase trustworthinessVagueness Prompting
Intentionally using vague wording or prompts to encourage more open-ended, exploratory, or creative responses from the AI, which can lead to novel outputs.
encourage creativityResponse format specification
Instructs the AI to follow a particular response structure, such as bullet points, numbered lists, or specific styles, to make the output more usable and aligned with user needs.
improve output qualitySetting output constraints
Specifying limits such as word count, tone, or complexity to tailor the responses according to specific requirements.
improve output qualityIterative optimization of prompts
Testing and refining prompts based on output to improve performance over time, involving feedback and experiments.
save tokens, improve output qualityAvoiding bias and ambiguity
Designing prompts using neutral language and ethical considerations to minimize biased or misleading responses from the AI.
reduce hallucinationsBe Direct / Assign a Task
This technique emphasizes giving clear and direct instructions to ChatGPT, specifying exactly what task to accomplish. It involves starting prompts with phrases like 'Your task is to...' to set a clear expectation and guide the model's response effectively.
improve output qualityProvide Context / More Is More
This approach involves giving detailed context and background information in the prompt to help ChatGPT generate more accurate, relevant, and nuanced responses. The more detailed the prompt, the better the output, provided it doesn't contradict itself.
improve reasoning and output detailFact-Check / Verify Sources
Since ChatGPT can hallucinate, this technique recommends explicitly instructing the model to cite sources and then verifying those sources externally. It enhances accuracy and trustworthiness, especially for high-stakes applications.
reduce hallucinationsSpecify Output Format
Explicitly stating the format of the output (list, table, JSON, etc.) ensures the response is structured as needed. It is highly effective in extracting organized and easily parseable data.
save tokens, improve output formatsState What Not To Include / Exclude
This technique involves instructing the model to omit certain information or avoid specific styles. It helps in reducing unwanted elements and tailoring the output to specific needs.
reduce hallucinations, tailor contentIterative / Multiple Runs / 80/20 Rule
This approach suggests running the same prompt multiple times to pick the best response, leveraging the 80/20 rule. It improves output quality through iteration and selection.
improve reasoning, obtain best resultExperiment / Tweak / Play
Encourages trying variations, changing prompts, and refining instructions through experimentation. It’s a fundamental prompting technique for discovering what works best.
improve output quality, discover effective promptsReAct (Reason & Act)
Combine natural-language reasoning with external tools (search, code execution, etc.) in a thought–action loop, enabling the model to fetch information or run code mid-prompt.
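The thought–action loop described above can be sketched as follows. Both `fake_model` and the `lookup` tool are illustrative stubs standing in for a real LLM call and a real search tool; only the loop structure is the point.

```python
# Minimal ReAct-style loop: the model alternates Thought/Action steps,
# tool results are fed back as Observations, until it emits an Answer.

def lookup(query):
    # Toy "search" tool: a tiny hard-coded knowledge base.
    facts = {"capital of France": "Paris"}
    return facts.get(query, "no result")

def fake_model(transcript):
    # Stand-in for an LLM: decides to act once, then answer.
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: lookup[capital of France]"
    return "Answer: Paris"

def react(question, max_steps=3):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_model(transcript)
        transcript += "\n" + step
        if step.startswith("Answer:"):
            return step.removeprefix("Answer: ").strip()
        # Parse the Action line, run the tool, feed the result back in.
        action = step.split("Action: lookup[")[1].rstrip("]")
        transcript += f"\nObservation: {lookup(action)}"
    return None

print(react("What is the capital of France?"))  # → Paris
```

In a real system, `fake_model` would be a model call and the Action parser would dispatch to several registered tools.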
improve reasoning and output qualityUtilizing system messages in chat models
System messages are used in chat models to set the behavior or tone of the conversation, guiding the model's responses throughout the interaction.
guide model behaviorAdd Specific, Descriptive Instructions
This technique involves providing the model with clear and detailed instructions to guide its response. It helps to reduce vague answers and improve the relevance of the output. Using a cheat sheet that covers elements such as persona, output format, tone, and edge cases is recommended.
improve output qualityDefine the Output Format
Specifying a structured or specific output format helps the model produce responses that are easier to parse and utilize programmatically, such as JSON, XML, or custom formats. It is particularly useful for API usage where response components need separation.
improve output parsingAdd a Data Context (RAG)
Retrieve relevant data from external sources such as documents, databases, or structured systems and include it in the prompt as context. This retrieval-augmented generation (RAG) helps produce more accurate and organization-specific answers.
provide organization-specific informationInclude Conversation History
Maintain and include previous dialog exchanges in the prompt to give the model context for ongoing interactions. This improves responses in multi-turn conversations and makes the assistant appear more coherent.
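A minimal sketch of maintaining that history, using the role-tagged message list common to most chat APIs (the message shape is an assumption, not a specific SDK):

```python
# Chat history kept as a growing list of role-tagged messages; the whole
# list, not just the latest message, is sent with each new request.

history = [{"role": "system", "content": "You are a concise travel assistant."}]

def add_turn(history, user_text, assistant_text):
    # Append each completed exchange so the model sees the full dialog.
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

add_turn(history, "Suggest a city for a weekend trip.", "How about Lisbon?")
add_turn(history, "What should I pack?", "Light layers and walking shoes.")

print(len(history))  # → 5
```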
dialog coherence and context awarenessFormat the Prompt: Use Clear Headlines Labels and Delimiters
Organize the prompt with clear sections, labels, and delimiters to help the model distinguish between instructions, data, examples, and user input. This clarity improves the quality and relevance of responses.
prompt clarity and structureBringing it All Together: The Anatomy of a Prompt
Combine all the above techniques—clear instructions, examples, context, format, history—into a comprehensive prompt structure for advanced prompt engineering.
comprehensive prompt designBonus: Multiprompt Approach
Use multiple prompts sequentially, such as classifying the user query into categories before providing specific responses or tools. This approach helps in managing complex applications and reducing confusion.
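The classify-then-respond flow can be sketched as below. The keyword classifier is a stub; in practice stage 1 would itself be an LLM prompt, and the category names and templates are illustrative.

```python
# Two-stage multiprompt flow: classify the query, then dispatch it to a
# category-specific prompt template.

TEMPLATES = {
    "billing": "You are a billing specialist. Resolve: {query}",
    "technical": "You are a support engineer. Debug: {query}",
    "general": "You are a helpful assistant. Answer: {query}",
}

def classify(query):
    # Stage 1 (stubbed): a real system would use a classification prompt here.
    if "invoice" in query.lower() or "charge" in query.lower():
        return "billing"
    if "error" in query.lower() or "crash" in query.lower():
        return "technical"
    return "general"

def build_prompt(query):
    # Stage 2: the classification picks which specialist prompt to use.
    return TEMPLATES[classify(query)].format(query=query)

print(build_prompt("Why was my card charged twice?"))
```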
complex application managementExperiment with Prompt Variations
Involves testing different phrasing, formats, and structures such as questions, commands, or open-ended statements to see which yields the best results. It emphasizes the importance of formatting in guiding the AI.
improve output qualityRefinement through Iteration
A technique where initial prompts are refined based on the responses received. It involves starting simple and adding details or clarity step-by-step to get closer to the desired output.
improve reasoning and output qualityQuery-Based Prompting
You pose a request as a question, encouraging the model to explain or provide code with context. This often results in more informative responses, including explanations alongside code. It is useful when you want both the solution and understanding of how to implement it.
improve reasoningIterative Refinement Prompting
You generate initial code and then iteratively ask the model to improve, extend, or modify it. This technique promotes a step-by-step improvement process, making complex features easier to develop incrementally.
improve output qualityStyle/Formatting Transformation
You ask the model to modify code to follow certain style guides, such as PEP 8, or to adhere to coding standards. This ensures the code meets organizational or community best practices.
improve output qualityFunction-by-Function Decomposition
The task is broken down into multiple sub-functions, each generated separately. This modular approach makes complex tasks manageable and enhances maintainability.
improve reasoningSkeleton (Template) Priming
A code skeleton or template is provided with placeholders, which the model fills in. This helps embed the generated code into a specific structure.
reduce hallucinationsReference-Heavy Priming
Extended references such as documentation or data schemas are provided, and the model is asked to generate code that complies with them. This aligns the output with standards or best practices.
reduce hallucinationsTemplate Prompting
Template Prompting uses a structured template to standardize prompts and responses, ensuring consistency and adhering to specific formats. It helps control output style and organization.
save tokens, improve output qualityPrompt Combination
Prompt Combination merges multiple prompts into a single, comprehensive prompt to generate richer and more nuanced responses. It is suitable for exploring complex topics.
complex topic explorationTemperature setting adjustments
Modifying the ‘temperature’ parameter to control randomness and creativity of the output. Higher values like 0.8 make outputs more diverse, while lower values like 0.2 make them more deterministic.
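What temperature actually does can be shown directly: it rescales the model's logits before sampling, so p_i ∝ exp(logit_i / T). This toy example (illustrative logits, not from a real model) shows 0.2 sharpening and 0.8 flattening the distribution.

```python
import math

# Temperature-scaled softmax over a small set of token logits.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 0.8)  # more diverse

print(round(low[0], 3), round(high[0], 3))
```

The top token's probability is much closer to 1.0 at low temperature, which is why low settings give deterministic-feeling output.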
control randomness and creativityReACT (Reason + Act) Framework
ReACT integrates language models with external tools and resources, allowing them to perform multi-step reasoning and actions. The framework guides models to decide when to think and when to act, often involving external API calls or tool usage, thereby expanding their problem-solving capabilities.
solve complex problems, external tool integrationPrompt Engineering best practices
This set of practices includes using clear, concise instructions, providing relevant context, examples, and specifying output formats to optimize the quality of outputs from language models.
save tokens, improve output qualitySeries Prompting Technique
Break the prompt into multiple sequential prompts to produce more structured and informative results. This technique helps avoid irrelevant information and improves output quality by guiding the AI step-by-step.
improve output qualityExplicit Constraints and Guidelines
Stating specific rules or limitations such as word count, tone, style, format, or elements to include or exclude. This channels the AI’s response to meet specific requirements.
improve response targetingSystem Message for Role Setting
Starting the conversation with a system message that defines the AI’s role, expertise, tone, and response style. It helps maintain consistent and focused responses from the AI.
improve consistency and toneStep-by-step Instructions or Chain-of-Thought
Breaking down complex tasks into smaller, sequential steps and instructing the AI to explain its reasoning process at each stage. This enhances accuracy and thoroughness for complex problems.
improve reasoning and accuracyIterative Refinement and Follow-ups
Starting with an initial prompt, evaluating the AI’s response, and then providing follow-up prompts to refine and improve the output over multiple exchanges, mimicking a collaborative process.
improve quality over iterationsPrompt Structure (O1 Prompt)
Organizing prompts into a clear structure with a goal statement, response format, constraints, and context. This ensures the AI response aligns well with expectations.
maximize response quality and relevanceUsing System Prompts
System Prompts set the overall behavior of the model by providing an initial instruction that guides all subsequent responses, effectively setting a context or goal.
guide model behaviorTemperature and Top-p Tuning
Adjusting temperature and top-p parameters influences randomness and diversity in outputs, effectively controlling creativity in responses.
save tokens, control output styleExplicit Prompt Instructions
Including explicit instructions within prompts clarifies the task for the model, reducing ambiguity and improving response accuracy.
improve output qualityGuardrails
Implements ethical guidelines and constraints within prompts to ensure responses align with certain values, principles, or legal considerations.
reduce hallucinationsPrompt Templating
Creates reusable prompt structures with placeholders for variables, enabling consistency and efficiency in prompt formulation.
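A minimal sketch using the standard library's `string.Template`; the prompt text and variable names are illustrative.

```python
from string import Template

# Reusable prompt template with named placeholders; substitute() raises
# if a variable is missing, which catches template drift early.

SUMMARY_PROMPT = Template(
    "Summarize the following $doc_type in at most $max_words words, "
    "in a $tone tone:\n\n$text"
)

prompt = SUMMARY_PROMPT.substitute(
    doc_type="meeting transcript",
    max_words=50,
    tone="neutral",
    text="Alice proposed moving the launch to Q3...",
)
print(prompt)
```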
save tokensPersona and Task Specification
Guiding the AI with a specific persona and task description to improve the relevance and quality of the output. This involves defining who you are and what you want the AI to do.
improve output relevanceUse of Context
Providing background information or context to help AI generate more targeted and accurate responses. Context helps AI understand the situation or details relevant to the task.
improve output relevance and specificityNatural Language Usage
Writing prompts in natural, conversational language to make AI more receptive and to generate more human-like responses.
improve output quality and naturalnessInstruction Clarity
Providing clear, detailed instructions on what the AI should do, which leads to more precise and relevant results.
improve output quality and relevanceConciseness and Simplicity
Keeping prompts short, simple, and to the point to prevent confusion and get better results.
reduce hallucinations and increase clarityConversational Tone
Writing prompts as if speaking to a person, which makes the AI responses more natural and engaging.
improve naturalness and engagementChain Prompts
Chain Prompts involve breaking down complex tasks into a sequence of smaller, manageable prompts. The output from one prompt serves as input for the next, enabling multi-step reasoning and detailed outputs.
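The output-feeds-input structure can be sketched like this. The `run` function is a stub for an LLM call; the step templates are illustrative.

```python
# Prompt chain: each step's output becomes the {input} of the next step.

def run(prompt):
    # Stand-in for a model call: here it just tags the text it was given.
    return f"[model output for: {prompt[:30]}...]"

def chain(text, steps):
    for template in steps:
        text = run(template.format(input=text))
    return text

steps = [
    "Extract the key facts from this text:\n{input}",
    "Turn these facts into an outline:\n{input}",
    "Write a short summary from this outline:\n{input}",
]
result = chain("Long source document ...", steps)
print(result.startswith("[model output"))  # → True
```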
complex task decompositionExplicit Goal Specification
Clearly defining the goal in the prompt to guide the AI's behavior towards a specific task or outcome.
improve output qualityReturn Format Specification
Specifying the desired output format, such as bullet points, JSON, or numbered list, to structure the response.
improve output qualityWarnings/Constraints
Including guidelines on what the AI should avoid or emphasize, such as avoiding unverified facts or technical jargon.
reduce hallucinationsContext Dump
Providing relevant background information or data within the prompt to ground the AI’s responses in specific details.
improve reasoningGoal, Return Format, Warnings, Context Framework (Greg Brockman)
A structured approach to prompting involving clearly stating the goal, specifying return format, setting warnings or constraints, and providing relevant context.
general best practicesPrompt Chaining and Multi-turn Interactions
Breaking down complex tasks into multiple prompts, where the output of one informs the next, enabling multi-step reasoning.
maximize performance in complex tasksUsing Metadata and Embedded Data
Augmenting prompts with extra information, structured markup, or data retrieval to increase precision and control.
maximize performance in data-driven tasksBe as specific as possible
Specificity in prompts helps to minimize ambiguity, providing enough background, desired format, length, tone, and examples to direct the AI's response accurately. It involves including detailed context, structural preferences, output length, tone, style, and illustrative examples to guide the AI effectively.
improve output qualityGet better answers by providing data
Including specific, organized data such as numerical values, dates, categories, and sources enhances the relevance and depth of AI responses. Well-structured data allows the AI to analyze trends, perform comparisons, and generate insights suitable for decision-making and research. Providing real and contextual data is critical for high-quality, actionable outputs.
perform data analysisSpecify your desired output
Clearly articulating the output format—such as a report, timeline, bullet points, or a narrative—ensures the AI response matches your needs. Including preferences about tone, style, and elements to include (headings, bullet points, summaries) guides the structuring of the response, making it more usable and aligned with your goals.
structured report generationUnderstand the model's shortcomings
Awareness of AI’s limitations—like lack of real-time data, inability to access external systems, biases, and context understanding—helps craft realistic prompts. It mitigates risks of hallucinations and misleading responses by setting appropriate expectations.
realistic prompt craftingCombining Role and Instruction Prompting
This approach combines role prompting with explicit instructions to create a more directed and contextual prompt. It enhances the AI's understanding and output accuracy for specific tasks.
improve output qualityProviding Style Examples in Prompts
Supply sample text or style guidelines within your prompt to have ChatGPT mimic the tone or style. This helps in maintaining consistency and reducing editing effort.
improve output qualityRequesting Simplified Explanations (ELI5)
Ask ChatGPT to explain topics in simple, easy-to-understand language, often in the style of explaining to a child. This enhances clarity and comprehension.
improve explanation clarityExplicit Requirement Specification
Clearly state specific requirements, keywords, or constraints in your prompt to guide ChatGPT's output more precisely.
improve output accuracyContinuing Text Prompts
Provide the beginning of a text or script for ChatGPT to continue, ensuring style and context are maintained and improving coherence.
improve output qualityIncluding Multiple Examples (Few-Shot)
Give ChatGPT several example inputs and outputs to guide the model in producing more accurate, relevant responses, especially for classification and reasoning.
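A few-shot prompt is just the demonstrations concatenated ahead of the new input; a sketch with made-up sentiment examples:

```python
# Few-shot prompt: several input→output demonstrations followed by the
# new input, so the model infers the pattern in-context.

examples = [
    ("The food was cold and bland.", "negative"),
    ("Friendly staff and quick service!", "positive"),
    ("It was fine, nothing special.", "neutral"),
]

def few_shot_prompt(examples, new_input):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # End on a bare "Sentiment:" so the model completes the label.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "Absolutely loved it!"))
```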
improve correctnessStep-by-Step Chain-of-Thought Reasoning
Encourage ChatGPT to show its reasoning process by prompting it to think through problems step by step, which improves arithmetic and reasoning accuracy.
improve reasoning accuracyExplicit 'Think Step by Step' Prompts
Simply instruct ChatGPT to think and explain every step involved in answering a question, leading to more accurate and transparent reasoning.
improve reasoningProviding Detailed Task Context and Instructions
This technique involves giving the language model specific context, relevant information, and clear instructions to help it better understand the task. It emphasizes structuring prompts as questions or commands to improve response accuracy.
improve output qualityIterative Prompt Development
The process of repeatedly refining prompts based on the model's responses, experimenting with variations, and improving prompt formulation to achieve desired results. It involves evaluation and fine-tuning.
improve output qualityStart Simple
Begin prompt design with basic, simple prompts to facilitate experimentation. Use straightforward prompts and progressively add complexity as needed. This allows for iterative testing and improvement.
improve output qualityAvoid Impreciseness
Be direct and clear in prompts, avoiding vague or overly clever language. Specific, concise prompts yield better, more predictable results.
improve output qualityBe specific and provide context
This technique involves giving clear, detailed instructions and relevant background to guide the model towards generating the desired output. It reduces ambiguity and helps the model understand the task better.
improve output qualityUse structured formats
Organizing prompts into clear sections, with headings, bullet points or placeholders, helps guide the model systematically through complex tasks. It enhances clarity and consistency of responses.
improve output qualityUse constrained outputs
Specifying the format or structure of the response directs the model to generate data in a particular shape, such as lists, tables, or specific formats, enabling more structured and usable outputs.
improve output quality4 key iteration methods
Techniques for refining prompts through a series of iterative adjustments to enhance the generated output.
improve output qualityText-to-text and text-to-image prompting
Different prompting styles depending on the output type, such as generating text or images with specific instructions.
generate specific contentPower-up strategies
Best practices to enhance prompts, such as emphasizing certain words, structuring prompts clearly, or adding constraints, to get better outputs.
improve output qualityConciseness and Relevance
This technique involves keeping prompts concise while maintaining relevance and clarity. It aims to avoid verbosity that could confuse the AI or lead to less relevant responses.
improve output qualityLeveraging Implicit Knowledge
Crafting prompts that tap into the AI's implicit knowledge base allows extracting insights informed by the training data. It involves asking questions that require the AI to use its learned understanding across a broad spectrum of topics.
improve reasoning and knowledge retrievalPrompt generator
A technique involving using a prompt generator tool to create initial prompts for AI models.
save tokens, improve output qualityBe clear and direct
A technique that emphasizes clarity and directness in prompts to improve response quality.
improve output qualityLet Claude think (chain of thought)
Encouraging the model to think step-by-step through chain-of-thought prompting.
improve reasoningGive Claude a role (system prompts)
Assigning a role to the model via system prompts to guide its responses.
improve output qualityLong context tips
Techniques for handling prompts with long contexts.
save tokens, improve reasoningExtended thinking tips
Prompting techniques that encourage models to think beyond the immediate response.
improve reasoningPrompt Refinement with Examples
Including specific examples within prompts helps guide the AI to understand the expected response more clearly, boosting the quality and relevance of the output. It serves as a form of in-context learning, giving the model a clear demonstration of the desired behavior.
improve output qualityLeveraging Larger AI Models
Utilizing larger AI models generally results in better prompt understanding and more accurate responses due to extensive training data and advanced pattern recognition. Larger models are more robust to prompt phrasing, reducing the need for meticulous wording.
improve understandingTopic-Specific Prompting Strategy
Tailoring prompts based on the AI's domain expertise ensures better responses. Using models trained or fine-tuned on specific topics yields more accurate and relevant answers, especially for niche or technical questions.
improve reasoning on niche topicsUnderstand the desired outcome
Define clear goals and expected results before interacting with AI. This involves planning what needs to be achieved and identifying the audience and actions involved.
improve output qualityDetermine the right format
Use a structured and consistent format based on the AI system's design and purpose. For example, art generators may require specific keyword placement, and prompts for reports may follow particular styles.
improve output qualityMake clear, specific requests
Create explicit and detailed prompts that precisely describe the task or question. Avoid vague questions and include all necessary details for the AI to understand the request.
improve output qualityDefine prompt length
Limit prompt length to what is necessary, considering token limits. Avoid overly long prompts that could be difficult for the AI to process.
save tokensChoose words with care
Use clear and unambiguous wording. Avoid slang, metaphors, and ambiguous expressions that could confuse the AI.
improve output relevancePose open-ended questions or requests
Frame prompts to encourage expansive responses rather than yes/no answers. This can lead to richer, more detailed output.
improve reasoning and detailInclude context
Provide relevant background information or specify the intended audience to tailor the response appropriately.
improve output relevanceSet output length goals or limits
Specify approximate response length or detail level, but be aware of the AI's inability to adhere strictly to exact limits.
guide output sizeAvoid conflicting terms and ambiguity
Ensure prompt language is consistent and unambiguous to prevent conflicting instructions such as 'detailed summary'.
improve output accuracyUse punctuation to clarify complex prompts
Employ punctuation such as commas, quotation marks, and line breaks to structure prompts clearly and prevent misinterpretation.
clarify prompt structureExplicit Instruction
Giving direct, explicit instructions within the prompt to guide the model's response more precisely.
improve output accuracy and specificityQuotation Use
Using quotation marks around prompts or specific instructions to clearly indicate what the model should focus on.
clarity and focus in responsesContent Transformation
Guiding the model to transform, edit, or adapt existing content or documents according to specified needs.
content adaptation and transformationTalk to the AI like you would a person
This technique involves engaging with the AI in a conversational manner, treating it like a person or colleague. It emphasizes the importance of interaction, personalization, and multi-step questioning to improve the quality of responses.
improve reasoningSet the stage and provide context
Providing background information or context helps the AI generate more focused and relevant responses. This technique involves framing the question with detailed context about your situation or needs.
improve output qualityKeep ChatGPT on track
Techniques to prevent the AI from drifting off-topic or fabricating information include asking it to justify its answers, cite sources, and re-read the prompt. These methods help ensure more accurate and focused responses.
reduce hallucinationsDon't be afraid to play and experiment
Encouraging active experimentation by trying different, creative prompts helps discover new ways to interact with the AI and improves prompt crafting skills. It involves playful and iterative testing of prompts and observing how the AI responds.
improve reasoningRefine and build on previous prompts
This iterative technique involves using the AI's previous responses as a basis for new questions or prompts, thereby deepening the interaction and gaining more detailed and nuanced answers.
improve reasoningPrompt Validation
Prompt validation involves systematically testing prompts using development and test datasets to ensure robustness and generalization. It includes controlling parameters like temperature and iterating prompts based on output quality. Validation helps in creating reliable prompts for large-scale deployment.
save tokensKeyword weight
Allows the user to adjust the importance of a keyword by using syntax like (keyword:factor) or through multiple parentheses or brackets, e.g., (keyword) or [keyword], to increase or decrease influence.
improve output quality() and [] syntax
Enables the user to modulate the influence of keywords by wrapping them in parentheses or brackets, where parentheses increase strength by approximately 1.1x, and brackets decrease it by approximately 0.9x. Multiple layers multiply the effect.
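The multiplicative stacking described above is easy to compute; a sketch using the ~1.1x and ~0.9x factors:

```python
# Effective weight implied by nested () and [] around a Stable Diffusion
# keyword: each "(" multiplies by ~1.1, each "[" by ~0.9; layers stack.

def effective_weight(token):
    weight = 1.0
    for ch in token:
        if ch == "(":
            weight *= 1.1
        elif ch == "[":
            weight *= 0.9
    return round(weight, 4)

print(effective_weight("((sunset))"))  # 1.1 * 1.1 = 1.21
print(effective_weight("[sunset]"))    # 0.9
```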
improve output qualityKeyword blending
Enables the combination of two keywords with a factor, using syntax like [keyword1 : keyword2: factor], to blend features from both over diffusion steps, influencing the overall output.
improve reasoning, generate hybrid imagesPrompt chunking with BREAK
Uses the keyword BREAK to split the prompt into separate chunks, each processed independently, allowing more control over different parts of the prompt within token limits.
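The chunking itself is a simple split on the keyword; a sketch of how a UI might separate the chunks before encoding each independently:

```python
# Split a prompt into independent chunks on the BREAK keyword.

def split_chunks(prompt):
    return [chunk.strip() for chunk in prompt.split("BREAK") if chunk.strip()]

prompt = "a castle on a hill, dramatic lighting BREAK oil painting, thick brushstrokes"
print(split_chunks(prompt))
# → ['a castle on a hill, dramatic lighting', 'oil painting, thick brushstrokes']
```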
reduce hallucinations, control compositionUsing custom models
Employs specific pre-trained or fine-tuned models to generate images in particular styles, which influences the effect of keywords in prompts.
style transfer, style consistencyForecasting pattern
This pattern involves providing the AI with data and asking it to make predictions or forecasts based on that data. It can include attaching documents in models that support it or pasting raw data in the prompt.
improve reasoningBuild on the Conversation
This approach involves creating a multi-turn interaction where you continue the conversation with follow-up prompts, allowing the AI to remember and expand on previous responses. It takes advantage of the context retention in chat-based systems.
improve reasoning and coherence in dialoguesSelf-Consistency with CoT
Self-Consistency involves generating multiple reasoning chains (multiple CoT outputs) and selecting the most consistent answer among them. This approach aims to improve the reliability and accuracy of the final output, especially in complex reasoning tasks.
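The selection step is a majority vote over the sampled chains' final answers. In this sketch the sampled outputs are hard-coded stand-ins for multiple high-temperature CoT generations:

```python
from collections import Counter

# Self-consistency: sample several reasoning chains, extract each final
# answer, and keep the most common one.

sampled_outputs = [
    "Step 1: 3 apples + 4 apples = 7. Answer: 7",
    "First add 3 and 4 to get 7. Answer: 7",
    "3 * 4 = 12. Answer: 12",  # one faulty chain, outvoted below
]

def majority_answer(outputs):
    answers = [o.rsplit("Answer:", 1)[1].strip() for o in outputs]
    return Counter(answers).most_common(1)[0][0]

print(majority_answer(sampled_outputs))  # → 7
```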
improve output qualityDefine Communication Channel and Audience
Providing context about the output format and target audience helps ChatGPT to generate tailored responses suited to the context, such as creating a YouTube script or a technical article.
improve output qualityChained Prompts
Using multiple prompts sequentially allows for iterative refinement of content, ensuring that responses match specific needs and include necessary details or keywords. It helps address response length limits and improves relevance.
improve output qualityFormat Output in Markdown
Instructing ChatGPT to produce output in markdown format results in better structuring, such as headings, tables, and lists. It enhances readability and usability of generated content.
improve output qualityGenerate Its Own Prompts
Leveraging ChatGPT to create prompts for itself can automate the generation of multiple query options, broadening research avenues and improving prompt design.
boost creativity and diversity of promptsFormat Responses in Markdown
Requesting markdown formatted output helps organize responses into clear sections, headings, lists, and tables, improving readability and structured presentation.
improve output formattingSpecificity and Clarity Enhancement
A technique emphasizing the importance of being specific, clear, and concise in prompt formulation to improve response relevance. It involves adding detailed instructions or clarifications to initial prompts.
improve output qualityIteration and Refinement
A prompting method where multiple prompts are iteratively used to clarify or refine the required information, leveraging the model's response variability to achieve the best results.
improve reasoning, reduce hallucinationsDirect Addressing of the Model
A communication style that involves directly addressing the model as 'You' to make instructions more explicit and effective, encouraging the model to follow directives more precisely.
improve output qualityUse of specific and varied examples
Including clear, targeted examples within prompts to help the model understand patterns and generate more accurate and focused outputs.
improve output qualityConstraints and limitations
Applying constraints within prompts to narrow the scope of the response, such as limiting length, scope, or format, to avoid inaccuracies and off-topic results.
improve output qualitySelf-evaluation prompts
Instructing the model to evaluate, rate, or check its own responses before finalizing to improve quality and correctness.
improve output qualityRepetition and emphasis
Reiterating key words or instructions within a prompt, or deliberately emphasizing them, to signal their clarity and importance to the model.
save tokens, improve clarityPrompt refinement and iteration
Repeatedly rewriting and refining prompts, such as adding key phrases or stress points, to improve the output quality continuously.
improve output qualityTemperature Tuning
Adjusting the temperature parameter controls randomness in the model's output. Lower values produce more deterministic responses, while higher values generate more diverse outputs. This is a parameter setting rather than a prompt technique.
save tokens, diversify outputPrompt Formatting and Structuring
Carefully designing and structuring prompts with clear instructions, bullet points, or specific formats can improve the clarity of the generated responses. Proper formatting guides the model to produce more relevant and organized output.
improve output clarity and relevanceUnderstand Your Objective
Clearly define what you want to achieve with your prompt, such as information, creativity, or problem-solving. This helps in shaping an effective prompt that aligns with your goals.
improve output relevanceKeep It Clear and Concise
Avoid overly complex or vague prompts by focusing on clarity. Clear prompts lead to better responses from the AI and reduce ambiguity.
improve output qualityExperiment and Iterate
Refine your prompts based on the responses received. Testing different phrasings and structures helps find the most effective prompt.
improve output quality and relevanceConsider Your Audience
Tailor prompts to suit the knowledge level and interest of the audience who will use or benefit from the AI's output.
improve accessibilityEvaluate and Adapt
Continuously assess the effectiveness of prompts and make adjustments based on performance and feedback to optimize results.
improve output qualityMark parts with XML tags
Claude has been fine-tuned to pay special attention to XML tags. Using them to clearly separate sections of the prompt (instructions, context, examples, etc.) can improve its understanding and response accuracy.
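A sketch of assembling such a prompt, with section names (`instructions`, `context`, `question`) chosen for illustration:

```python
# Prompt whose sections are delimited by XML tags so the model can
# unambiguously separate instructions, context, and the question.

def xml_prompt(instructions, context, question):
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>\n"
        f"<question>\n{question}\n</question>"
    )

prompt = xml_prompt(
    "Answer using only the provided context.",
    "The 2024 report lists revenue of $4.2M.",
    "What was revenue in 2024?",
)
print(prompt)
```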
improve output qualityProvide clear task descriptions
Clarify the instructions given to Claude, specifying exactly what is expected to avoid ambiguity and improve the relevance and accuracy of the responses.
improve output qualityKeep responses aligned to the desired format
Specify exactly what format you want the response in (e.g., JSON, XML, markdown) to prevent unwanted chatty answers and ensure usability.
reduce hallucinationsDefine a persona to set tone
Setting a persona helps Claude reply in a tone and style that is appropriate for the context, making the interaction more natural and effective.
improve output qualityAllow Claude to say 'I don't know'
Explicitly instruct Claude to admit when it is uncertain, reducing hallucinations and increasing trustworthiness of responses.
reduce hallucinationsUse long context window effectively
Utilize Claude’s extended context window to include extensive information, enabling handling of complex, data-rich prompts.
improve reasoningvocabulary of 33 terms
A comprehensive vocabulary that defines 33 key terms related to prompt engineering, helping standardize the language used in the field.
standardize prompt terminologyDecomposition
Break down complex tasks into smaller, more manageable parts to facilitate better understanding and responses.
improve reasoningSelf Consistency
Self-Consistency samples multiple responses at a high temperature and then uses majority voting to select the most consistent final answer. This method improves robustness and accuracy by considering multiple reasoning paths.
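The voting step is simple to sketch; here the sampled answers are a fixed list standing in for final answers parsed from several high-temperature runs of the same chain-of-thought prompt (a real pipeline would call an LLM):

```python
from collections import Counter

def self_consistent_answer(samples):
    """Majority-vote over final answers extracted from sampled reasoning paths."""
    return Counter(samples).most_common(1)[0][0]

# Stand-ins for parsed final answers from five sampled completions.
sampled_answers = ["42", "42", "41", "42", "40"]
best = self_consistent_answer(sampled_answers)  # "42"
```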
reduce hallucinations, improve output qualityTask-specific Prompts
Prompts tailored to specific tasks or applications, such as question-answering or commonsense reasoning, to improve the model's performance on those tasks.
improve reasoning/output accuracyNatural Language Instruction Prompts
Prompts that are written in natural language to provide context or instructions to guide the model toward the desired behavior, often used in zero-shot or few-shot learning settings.
guide behavior, improve output relevanceLearned Vector Representations
A method where prompts are represented as learned vector embeddings that activate relevant knowledge within the model, often used in prompt tuning or prompt learning techniques.
specialized prompt tuningAutomatic Prompt Generation (AutoPrompt)
Automatically generating prompt templates or verbalizers using algorithms or heuristics to improve performance or automate the prompt design process.
automate prompt designPrompt Pattern Catalog
Creating a catalog of different prompt patterns for specific tasks to improve prompt design and enable systematic exploration of prompt variations.
systematic prompt explorationZero-Shot Reasoning Prompts
Use simple, explicit prompts like 'Let's think step by step' to elicit reasoning without few-shot examples. Small phrasing changes can significantly boost reasoning ability in zero-shot scenarios.
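The trigger phrase can be appended programmatically; a minimal sketch (the Q/A wrapper format is an assumption, only the trigger phrase comes from the technique itself):

```python
def zero_shot_cot(question):
    """Wrap a question with the canonical zero-shot reasoning trigger.
    The exact phrasing matters empirically, so it is kept verbatim."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = zero_shot_cot("If I have 3 apples and buy 4 more, how many do I have?")
```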
improve reasoningPERFECT framework
A comprehensive prompt engineering framework introduced in the paper, aimed at enhancing the performance and generalization of language models. It provides structured methodologies for constructing prompts to adapt models to a variety of NLP tasks and domains, fostering nuanced and flexible responses.
improve output qualityContext-aware prompts
Designing prompts that incorporate context-specific information to improve the relevance and accuracy of model responses. This technique involves embedding necessary background or situational data within the prompt to guide the model effectively.
improve reasoning and contextual understandingInteractive prompts
Creating prompts that allow for interaction, follow-up questions, and dynamic engagement with the model to refine responses and adapt to user needs. This approach enhances flexibility and the ability to guide models through iterative dialogue.
interactive and adaptive promptingPrompting with rhetorical strategies
Using rhetorical techniques within prompts to steer the AI's responses, making them more useful, persuasive, or aligned with human communication styles. This includes strategies like persuasion, emphasis, framing, and others to shape the output.
make AI responses more useful and human-likePrompt Engineering and Crafting
The process of carefully designing prompts with specific instructions, formatting, or context to elicit better responses from the model. It includes techniques like adding constraints or context.
improve output qualityDivide and Prompt (Text-to-SQL)
A method that leverages CoT prompting by breaking down Text-to-SQL tasks into subtasks, improving accuracy.
save tokens/improve reasoning in specific tasksTask-Oriented Prompting
Involves clearly defining the goal of a request to guide the AI to understand the specific task, ensuring relevant and targeted responses. It often includes specifying the depth, tone, and structure needed for the output.
improve output qualityStep-by-Step Guidance (Chain-of-Thought Prompting)
Encourages the AI to explain its reasoning process in detail by breaking down complex problems into structured steps. It enhances accuracy and transparency, especially for mathematical and logical tasks.
improve reasoningSelf-Reflection Prompts
Designs prompts that ask the AI to evaluate or review its own responses, leading to improved accuracy and deeper analysis. It helps in iterative content generation and error correction.
improve reasoningIterative Prompting
Involves refining prompts in stages to get more detailed or accurate responses, breaking down complex questions into sub-questions.
save tokensData Interpretation and Analysis
Framing prompts to analyze data, interpret trends, or compare different datasets, often requesting insights or strategies.
improve reasoningPersonalized Learning Assistance
Structuring prompts to act as a tutor, including step-by-step explanations, quizzes, or tailored questions based on the learner's level.
improve output qualityBusiness Strategy and Decision-Making
Guiding the AI to evaluate strategies, perform SWOT analyses, or assess risks based on specific business questions.
improve reasoningTechnical Explanations and Troubleshooting
Providing detailed, specific prompts for technical tasks, code debugging, and explanations of complex concepts.
improve reasoningRefining and Iterating Responses
Continuously improving AI output by refining prompts based on previous responses, adding clarity or specificity.
save tokensLeveraging Multi-Perspective Analysis
Encourages the AI to evaluate multiple viewpoints or stakeholder perspectives to enrich the response.
improve reasoningIncorporating Feedback Mechanisms
Involves asking the AI to evaluate or critique its own output to improve quality and correctness.
improve reasoningFormatting for Readability and Clarity
Specifies formatting guidelines to produce well-structured, easy-to-read outputs, such as tables, lists, or sections.
improve output qualityPrompt Customization for Different Audiences
Tailoring prompts to match the audience’s knowledge level, such as simple explanations for laypeople or detailed technical breakdowns for experts.
improve output qualityVocabulary of Prompting Terms
Establishes a set of 33 vocabulary terms to standardize the language used in prompt design and analysis.
standardize terminologyImport External Content for Analysis
Attach or reference external documents, articles, or PDFs for ChatGPT to analyze. This can include extracting key points, simplifying language, or identifying patterns.
analyze external contentCreate Custom Organizational Formats
Request information to be organized into tables, bullet lists, comparison charts, or formatted in CSV. It helps in categorizing or comparing data visually.
structured data presentationGenerate AI-Assisted Visuals
Provide detailed art directions for image generation, including style, perspective, lighting, and composition, utilizing integrated image generation models like DALL-E.
visual content creationApply Response Constraints
Set clear boundaries such as answer length, number of paragraphs, or word count to obtain concise and focused responses.
focused answersCreate Prompts for Other AI Models
Design structured prompts or templates specifically tailored for other AI tools like Midjourney or Claude, by using ChatGPT to generate those prompts.
cross-model prompt engineeringTransform Lists and Datasets
Ask ChatGPT to alphabetize, categorize, prioritize, or otherwise organize data, lists, or ideas to save manual effort.
data organizationRequest Expert Feedback
Use ChatGPT to evaluate or review content from specific professional perspectives, like marketing or academia.
expert-level critique and feedbackOutput Formatting
Specifying how the output should be formatted, such as list, table, or specific style, to meet user needs.
improve output clarityStyle Instructions
Guidelines within prompts that dictate the style or tone of the generated output, such as formal, humorous, or concise.
improve output styleRephrasing/Question Reformulation (ReRead)
Changing the phrasing of a question or prompt to improve clarity or guide the model's response.
improve reasoning and clarityAnswer Shape
Modifying the structure of the answer, such as making it a list, paragraph, or specific format.
control output formatAnswer Space
Specifying the domain or type of answers expected, to guide the model's responses.
control output styleRegex
Using regular expressions to extract structured answers from raw model outputs.
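A sketch of such an extractor, assuming the model was instructed to end its reply with an `Answer: <number>` marker (the marker convention and fallback are assumptions, not a fixed standard):

```python
import re

def extract_final_answer(raw_output):
    """Pull the numeric answer out of a free-form model response.

    Looks for an 'Answer: <number>' marker first; falls back to the
    last number in the text if the marker is absent.
    """
    m = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)", raw_output)
    if m:
        return m.group(1)
    numbers = re.findall(r"-?\d+(?:\.\d+)?", raw_output)
    return numbers[-1] if numbers else None

raw = "First, 3 apples plus 4 apples is 7 apples. Answer: 7"
value = extract_final_answer(raw)  # "7"
```

Anchoring extraction to an instructed marker is more reliable than scraping arbitrary text, which is why regex extraction is usually paired with a format instruction in the prompt.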
answer extractionSeparate LLM
Using a separate language model or model instance dedicated to answer extraction, separate from response generation.
answer extractionTranslate First Prompting
A multilingual prompting strategy where the input is first translated into a target language before processing.
multilingual supportX-InSTA Prompting
In-context learning approach for multilingual settings involving cross-lingual transfer.
multilingual in-context learningPARC (Prompts Augmented by Retrieval Cross-lingually)
A prompt enhancement technique that uses retrieval of cross-lingual data to augment prompts.
multilingual retrieval augmentationMultimodal In-Context Learning
Using multiple modalities such as text and images together for in-context learning.
multimodal reasoningPrompt Modifiers
Adjustments made to prompts to influence the model's understanding or response, especially in multi-modal contexts.
multimodal prompt tuningTool Use Agents
Agents that utilize external tools or systems to enhance their capabilities, like calculators or search engines.
augment reasoning with external toolsObservation-Based Agents
Agents that incorporate external observations, environment feedback, or sensory data in their reasoning.
dynamic reasoningReasoning and Acting (ReAct)
A framework enabling models to reason through steps and take actions interactively, often in a loop.
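The loop can be sketched end to end with a scripted policy standing in for the LLM and a toy calculator tool; the Thought/Action/Observation line format follows the usual ReAct convention, but the specific tool and policy here are stubs:

```python
def calculator(expression):
    """A toy tool the agent can call (arithmetic only)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

def scripted_policy(scratchpad):
    """Stand-in for an LLM: maps the scratchpad to the next step.
    A real ReAct agent would generate these lines with a model."""
    if "Observation:" not in scratchpad:
        return ("Thought: I need to compute 17 * 23.\n"
                "Action: calculator[17 * 23]")
    return "Thought: I have the result.\nFinal Answer: 391"

def react_loop(question, max_steps=5):
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        step = scripted_policy(scratchpad)
        scratchpad += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse the Action line and run the named tool.
        action = step.split("Action: calculator[")[1].rstrip("]")
        scratchpad += f"\nObservation: {calculator(action)}"
    return None

answer = react_loop("What is 17 * 23?")  # "391"
```

The key design point is the growing scratchpad: each Observation is appended so the next reasoning step can condition on real tool output rather than a guess.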
interactive reasoning and decision makingLifelong Learning Agents
Agents that continually learn and adapt from ongoing interactions and data.
continual learningGhost in the Minecraft (GITM)
An agent concept where the model simulates or controls a character within a virtual environment like Minecraft.
virtual environment interactionVerify-and-Edit
A process where generated outputs are verified and then edited for correctness or quality.
output quality and correctnessDemonstrate-Search-Predict
A chain where the model searches for relevant information, predicts, and then refines the output.
integrate external knowledge and reasoningIterative Retrieval Augmentation
Repeatedly retrieving and incorporating external data to progressively improve responses.
knowledge refinementPrompt Sensitivity
The degree to which prompt phrasing affects the model's responses, indicating the importance of prompt stability.
robustness and reliability analysisVerbalized Score
Expressing confidence or scores in a verbal manner to calibrate the model's outputs.
calibration and trustworthinessVanilla Prompting
The simplest form of prompting without additional techniques or modifications.
basic promptingAutomatic Prompt Optimization (APO)
A set of automated techniques aimed at improving the performance of large language models on various NLP tasks by optimizing prompts automatically.
improve output qualityAutomated Prompt Search/Generation
Techniques that automate the search or generation of effective prompts, often using algorithms or machine learning models to find optimal prompts.
save tokens, improve reasoningPrompt Engineering Patterns Catalog
A collection of prompt engineering techniques organized in pattern form, aiming for knowledge transfer and reusable solutions in prompt design.
improve output qualityOutput Structuring Patterns
Techniques that focus on controlling the format and structure of the output from the LLM, such as emphasizing specific formats or information organization.
improve output qualityInstruction Tuning with Prompts
Providing explicit instructions within prompts to guide the LLM's behavior towards desired qualities or actions.
improve reasoning, improve output qualityPrompt Variations and Sensitivity Testing
Creating different prompt variations to test the LLM's responses and select the most effective prompt structure.
improve output qualityTask-Specific and Critique Prompting
Custom prompts created for a specific task or to critique and refine responses. They help tailor the output to precise needs or improve ongoing responses.
improve output qualityStructured Prompts
Clear and concise prompts that provide explicit instructions or output formats to ensure the response adheres to desired structure.
improve output qualityReinforcement of Reasoning with Tokens
Increasing the length of reasoning chains or tokens in the prompt to enhance performance in reasoning tasks.
improve reasoningCreate a . . .
This prompt instructs the AI to generate a specific type of document or content. It begins with the phrase "Create a" followed by what is needed, such as a script, poem, or email. It's one of the most common prompt formats for requesting content creation.
generate contentComplete this sentence . . .
The prompt provides a sentence or phrase for the AI to complete, helping to focus or steer open-ended questions into more specific responses. Useful in generating conclusions or continuations.
focus outputShow this as a . . .
Requests the AI to convert data or raw information into a different format, such as a graph or chart. It helps visualize data through text-based representations.
data visualizationWrite a list of . . .
Instructs the AI to generate a list of items, such as ideas, titles, or recommendations. Useful for brainstorming or idea generation.
idea generationHackAPrompt
A framework and paper based on an online challenge to explore prompt hacking techniques and build a taxonomy of prompt attacks, useful for testing the robustness of user-facing LLM interfaces.
reduce hallucinationsSystematic Literature Review Technique (PRISMA or other systematic review methods)
A formal process used to collect, review, and synthesize papers or research studies in a systematic fashion, ensuring comprehensive coverage of the literature, sometimes aided by AI in the review process.
improve output qualityStructured Output Prompting
Designing prompts to generate outputs in structured formats like JSON, tables, or graphs for easier interpretation and downstream processing.
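A sketch of the pattern, with a stubbed model reply; the schema and the tolerant parsing (stripping stray prose around the JSON object, a common failure mode) are illustrative assumptions:

```python
import json

def structured_prompt(text):
    return (
        "Extract the person's name and age from the text below.\n"
        'Respond with ONLY a JSON object {"name": <string>, "age": <integer>} '
        "and nothing else.\n\nText: " + text
    )

def parse_json_reply(reply):
    """Parse a model reply expected to be JSON, tolerating prose around it."""
    start, end = reply.find("{"), reply.rfind("}")
    return json.loads(reply[start:end + 1])

# A plausible model reply for the prompt above (stubbed; a real run
# would send structured_prompt(...) to an LLM).
reply = 'Sure! {"name": "Ada", "age": 36}'
record = parse_json_reply(reply)  # {'name': 'Ada', 'age': 36}
```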
improve output qualitySystematic classification of prompting techniques
The paper performs a systematic literature review to identify existing prompting techniques for code generation, and evaluates a subset for secure code generation.
investigate prompting strategiesZero-Shot Learning Prompts
The prompt instructs the model to perform a task without providing examples, relying on the model's general knowledge to generate the response.
improve output qualityLogical Reasoning-Enhanced Prompting
A technique that enhances the logical reasoning capability of a language model, potentially leading to more accurate and coherent responses. This involves integrating logical reasoning frameworks into prompts.
improve output qualityToolformer
Teaches LLMs to identify when to use external tools, specify which tools to invoke, and how to incorporate their output into the response. It enables the model to perform tasks requiring external data or computation.
integrate external toolsChameleon
A modular framework where a controller generates natural language programs that compose and execute a wide range of tools, including other models, vision systems, and web searches. It handles multimodal reasoning tasks.
integrate external and multimodal toolsXoT
XoT is a novel prompting technique designed to enhance the problem-solving abilities of large language models (LLMs) by structuring thoughts in a flexible and scalable manner. It aims to overcome the limitations of existing methods like IO, CoT, ToT, and GoT, providing a more adaptable way to leverage LLMs' reasoning capabilities.
improve reasoningRe-Read Prompting
Re-Read Prompting involves having the model review or re-read the prompt or previous output to better understand or refine its response. This technique is meant to improve the model's comprehension and output quality by multiple passes over the prompt or response.
improve output qualitySelf-Harmonized Chain-of-Thought (ECHO)
ECHO enhances traditional Chain-of-Thought (CoT) prompting by refining multiple reasoning paths into a cohesive, unified pattern through an iterative self-harmonization process. It clusters questions, generates rationales with Zero-Shot-CoT, and iteratively unifies these demonstrations to improve reasoning accuracy. This method aims to address diversity issues in auto-CoT and reduce manual effort in few-shot CoT.
improve reasoningInstruction Fine-Tuning
Fine-tuning the model with a specific dataset to specialize it for certain tasks or to follow specific instructions more accurately. This method improves the model's performance in targeted applications.
improve output qualityDomain Priming
Instructs the AI to adopt a specific role or perspective, emphasizing specialized knowledge or viewpoints. It enhances relevance and depth by guiding the AI's response based on a chosen persona or expertise.
improve output qualityConceptual Combination
Prompts the AI to merge two unrelated concepts to generate novel ideas, stories, or solutions. It stimulates creative output and innovative thinking across domains.
generate ideasSocratic Questioning
Uses probing questions to explore deeper insights, challenge assumptions, or stimulate critical thinking through a series of targeted prompts.
improve reasoningTeaching Techniques in Prompting
Specific prompting techniques used in educational contexts, such as graduated guidance, time delay, or physical prompts to help children learn new skills effectively.
improve learning outcomesVisual Aids and Cues
Using visual aids like pictures, diagrams, or physical cues to assist understanding and execution of tasks.
improve reasoning and learningModeling and Demonstration
Showing a correct way of performing a task as a form of prompting to demonstrate the desired behavior.
teach a new skillFading Prompts
Gradually reducing the level of assistance or prompts as the learner gains independence.
teach a kid a new task efficientlyErrorless Learning
A prompting strategy that minimizes errors during learning by providing prompts immediately before an error would occur.
maximize learning with minimal frustrationSelf-Generated Knowledge + Exemplars
An extension of analogical prompting where the LLM is asked to identify core concepts or knowledge within a problem and generate high-level explanations or tutorials before solving. This approach helps in addressing complex tasks, especially in coding or STEM problems, by encouraging the model to focus on fundamental concepts rather than just low-level exemplars. The technique involves instructing the model to first generate relevant high-level knowledge, then proceed to solve.
improve reasoning, enhance understandingPrompt Transformation
This technique involves evaluating and transforming imperfect prompts into more effective ones for better interactions with generative AI. It emphasizes the inventive reworking of prompts to optimize their effectiveness. The overall goal is to refine prompts for superior output quality.
improve output qualitySelf-consistency decoding
A method where multiple chain-of-thought reasoning paths are generated and the most common conclusion is selected. It enhances the reliability of the model's output by aggregating multiple reasoning attempts.
improve reasoning accuracyPrompting to disclose uncertainty
A prompting technique that encourages the model to give estimates of its uncertainty by analyzing the likelihood scores of its token predictions, which are usually not explicitly shown.
reduce hallucinations, improve reliabilityPrompting to estimate model sensitivity
A method that addresses the high sensitivity of LLMs to prompt formatting and structure by systematically analyzing different prompt formats or using metrics that evaluate performance distribution across multiple prompts.
improve robustnessUsing language models to generate prompts
A meta-prompting technique where one language model is used to generate prompts or instruction examples for another model, often using beam search or clustering to find effective prompts.
automate prompt creationPrompt formats
Techniques that involve structuring prompts using formats that specify the description, style, lighting, and other artistic factors in text-to-image generation to control output quality.
improve output qualityArtist styles
Using specific artist names or art styles in prompts to generate images in a particular visual style, such as 'in the style of Vincent van Gogh'.
style control in image generationTextual inversion and embeddings
Creating new word embeddings through optimization over example images to allow for specific styles or concepts to be included in prompts as pseudo-words.
style transfer, concept embeddingUsing gradient descent to search for prompts
A method of optimizing prompt vectors or token sequences via gradient descent to maximize the likelihood of desired outputs, also known as soft prompting or prompt tuning.
automatic prompt optimizationChaining Prompts
A technique of breaking down complex tasks into a series of simpler prompts, where each prompt builds upon the previous one's response. It involves identifying an overall objective, segmenting the task into sub-tasks, ordering them, crafting specific prompts for each, chaining them logically, and refining iteratively.
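The chaining loop itself is small; here a deterministic stub stands in for the LLM so the control flow is visible (the two-step summarize-then-translate chain is an invented example):

```python
def run_chain(model, initial_input, prompt_templates):
    """Feed each prompt's output into the next.  `model` is any
    callable prompt -> response (stubbed below; really an LLM call)."""
    value = initial_input
    for template in prompt_templates:
        value = model(template.format(input=value))
    return value

# Deterministic stub standing in for an LLM, keyed on the prompt prefix.
def stub_model(prompt):
    if prompt.startswith("Summarize"):
        return "revenue up, costs down"
    if prompt.startswith("Translate"):
        return "ingresos arriba, costos abajo"
    return prompt

result = run_chain(
    stub_model,
    "Q3 revenue grew 12% while costs fell 3%.",
    ["Summarize in five words: {input}",
     "Translate to Spanish: {input}"],
)  # "ingresos arriba, costos abajo"
```

Because each sub-task gets its own prompt, intermediate outputs can be logged, validated, or edited before the next link runs — the transparency benefit mentioned above.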
solve complex tasksGuided Reasoning
A method that structures prompts to lead AI through a step-by-step reasoning process, especially for analytical and complex tasks. This includes using templates and frameworks such as data analysis pipelines or creative development processes to ensure thorough output.
improve reasoningFlowchart for Breaking Down Complex Tasks
A structured approach that involves visually breaking down a complex task into steps, organizing them sequentially, and crafting prompts for each step to guide the AI systematically through the process.
solve complex tasksTemplate and Framework Prompts
Using predefined templates or frameworks to guide AI in specific tasks. Examples include data collection analysis, creative idea generation, or business strategy formulation, which help the AI follow a logical progression.
improve output qualityScenario-based Prompting
Crafting prompts around real-life case studies, where a sequence of prompts guides the AI through problem-solving, strategy development, or creative generation tailored to a specific scenario.
solve real-world problemsPrompt Combining
Combining different prompting techniques like instructions, roles, and few-shot examples into a single prompt to leverage multiple strategies for better results.
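One way to sketch the combination — role, instruction, and few-shot examples stacked into a single prompt (the Q/A layout is an assumption, not a fixed format):

```python
def combined_prompt(role, instruction, few_shot_pairs, query):
    """Stack role, instruction, and few-shot examples in one prompt."""
    lines = [f"You are {role}.", instruction, ""]
    for q, a in few_shot_pairs:
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines += [f"Q: {query}", "A:"]
    return "\n".join(lines)

prompt = combined_prompt(
    role="a concise math tutor",
    instruction="Answer with a single number.",
    few_shot_pairs=[("2 + 2", "4"), ("10 / 2", "5")],
    query="7 * 6",
)
```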
enhance output qualityPriming Prompt
Priming chatbots involves structuring prompts to guide the chatbot’s responses. This technique aims to influence the behavior and output of chatbots for specific goals by setting context or expectations beforehand.
improve output qualityPrompt Hacking
Prompt hacking refers to understanding and working around the limitations of LLMs, such as hallucinations and biases, to get more accurate and reliable outputs. This involves strategic prompt design to mitigate issues.
reduce hallucinations, mitigate biasesInstruction Tuning (Prompt Tuning)
Instruction tuning involves training or fine-tuning the model with prompts that include instructions, making it better at following explicit commands. It can also include prompt tuning, which optimizes prompts as model inputs.
improve output qualityModularization of prompts
Decomposing complex programming problems into smaller, independent reasoning steps to facilitate better understanding and solution. The approach promotes hierarchical problem-solving and structuring reasoning with a Multi-Level Reasoning Graph (MLR Graph).
improve reasoningEEDP
A novel prompting technique tailored to semi-structured documents. It aims to improve mathematical reasoning and understanding in financial document question answering tasks. It matches or outperforms baseline performance while providing nuanced insights into LLM capabilities.
improve reasoningChain-of-Dictionary (CoD)
Chain-of-Dictionary (CoD) is a novel prompting technique that improves multilingual machine translation (MNMT) by adding chained multilingual dictionary entries to the prompt. It augments the translation task with explicit translations of key words in multiple auxiliary languages, providing the model with chained lexical hints to enhance translation accuracy, especially for low-resource or rare words.
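A sketch of assembling such a prompt; the chained "X means Y in L1 means Z in L2" phrasing and the sample dictionary entries are illustrative assumptions about the hint format:

```python
def cod_prompt(source_sentence, keyword_chains, target_lang):
    """Prepend chained multilingual dictionary entries for key words.

    keyword_chains maps a source word to (language, translation) pairs
    covering auxiliary languages and the target language.
    """
    hints = []
    for word, chain in keyword_chains.items():
        links = " means ".join(f'"{w}" in {lang}' for lang, w in chain)
        hints.append(f'"{word}" means {links}.')
    return ("\n".join(hints)
            + f"\nTranslate into {target_lang}: {source_sentence}")

prompt = cod_prompt(
    "The hedgehog crossed the road.",
    {"hedgehog": [("French", "hérisson"), ("German", "Igel")]},
    "German",
)
```

The chained hints give the model explicit lexical anchors for rare words before it ever sees the translation instruction.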
improve output qualityRole Prompting / Role Playing
Implying a role or persona in the prompt to guide the model to generate responses in a specific manner or perspective. This can involve instructing the model to act as a tutor, a lawyer, or a specific character.
context-specific outputTailored Prompting Technique for Semi-Structured Documents
This technique involves customizing prompts specifically designed for semi-structured documents like complex tables and unstructured text in financial documents. It aims to improve the LLMs' ability to extract and reason over structured data. It is a novel approach introduced in the paper for handling complex data structures.
improve output qualitySelf-Planning prompting
Allows the model to formulate a detailed, step-by-step plan before executing the task. It involves the model creating a structured plan based on the problem description and then executing it incrementally, enhancing planning and organization.
improve output qualityModularization of Thoughts (MoT) prompting
Introduces a structured, hierarchical approach inspired by software modularization principles. It decomposes complex programming problems into smaller, independent, and interdependent modules, organized as an MLR (Multi-Level Reasoning) Graph, enhancing understanding and alignment between reasoning steps and code.
improve reasoning, enhance output qualityTree-Structured Prompting
Tree-Structured Prompting is a technique where prompts are organized in a tree-like structure, allowing complex, hierarchical interactions with the LLM. It's useful for guiding models through multi-step reasoning or structured tasks. This technique leverages branching prompts to explore different pathways or solutions.
improve reasoningTwo-step pipeline for SCoT and code generation
A process where a SCoT is first generated with potential error-checking and then used as input to generate final code, allowing debugging and validation of intermediate reasoning steps.
improve output reliability and correctnessPrompt engineering with examples of structured reasoning
Designing prompts that include several examples of natural language requirements paired with structured reasoning (SCoT) and code, to teach LLMs how to generate structured reasoning steps.
improve model understanding and output qualityBranching Inquiry Structure
A structured approach within the Tree of Thoughts method that involves creating a branching flow of related questions or tasks, which can be expanded or pruned depending on the relevance and progress. This helps guide the AI's focus and facilitate multi-path exploration.
improve output qualityPrompt engineering or prompt tuning
Designing and optimizing prompts to elicit the best possible responses from language models. This includes experimenting with prompt phrasing, structure, and additional context to enhance output relevance and quality.
improve output qualityRole-playing or persona-based prompting
Assigning the model a specific role, persona, or perspective within the prompt to generate responses aligned with that role. This can be used to obtain creative, empathetic, or domain-specific output.
reduce hallucinationsObservation-Based Reasoning
Observation-Based Reasoning is a prompting technique inspired by the scientific method that involves systematic observation, question generation, hypothesis formation, testing, refinement, and conclusion. It aims to enhance reasoning in language models by mimicking the scientific discovery process and encouraging models to generate and verify hypotheses based on observations before reaching a conclusion.
improve reasoningSelf-Interrogation Prompting (SQuARE)
A novel prompting technique where the model generates a sequence of auxiliary questions related to the main question and then attempts to answer them. This approach aims to promote more thorough exploration of the topic, leading to better reasoning and answer accuracy.
improve reasoningPlan-and-Solve Plus (PS+)
A prompting framework that enhances LLM reasoning by using detailed instructions for variable extraction, calculation, and dividing problems into subtasks. It aims to reduce errors like missing steps and calculation mistakes by structuring the problem-solving process.
improve output qualityInstruct prompt tuning
Using specific instructions in the prompt to tell the model exactly what to do, which reduces token consumption and improves clarity. Involves fine-tuning the model on instruction datasets or using explicit instructions.
improve output qualityLogic-of-Thought (LoT)
A prompting approach leveraging propositional logic to generate expanded logical information from input context, acting as an augmentation to enhance logical reasoning.
save tokens, improve reasoningTown Hall-Style Debate Prompting
A prompting technique where a single LLM simulates a multi-persona debate involving multiple entities. Each persona presents their reasoning, refutes others, and eventually votes or reaches a consensus. It aims to leverage divergent perspectives within one model to improve reasoning and decision accuracy.
improve reasoningRule-Based Prompting
A novel prompting technique introduced to generate code-mixed sentences by applying specific rules to guide the language model's output.
generate code-mixed sentencesCode-guided generation method
A novel prompting technique that involves utilizing existing code and guided analysis to improve code generation with Large Language Models. It incorporates mechanisms like requirement analysis, example retrieval, and iterative code refinement.
improve output qualityExample retrieval
A technique that searches and retrieves similar programs or code examples based on requirements. A selector then filters these examples to identify the most relevant and informative ones, which are used as context for code generation.
save tokens, improve outputPrompt construction with triples
A method of structuring the prompt with examples, each containing a requirement, preliminary analysis (such as test cases), and code. The prompt ends with a new requirement, guiding the LLM to produce the intermediate analysis first.
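A sketch of assembling such a triple-structured prompt (the Requirement/Analysis/Code labels and the sample triple are illustrative):

```python
def triple_prompt(examples, new_requirement):
    """Each example is a (requirement, analysis, code) triple; the prompt
    ends with only a new requirement so the model produces the analysis
    before writing code."""
    parts = []
    for req, analysis, code in examples:
        parts.append(f"Requirement: {req}\nAnalysis: {analysis}\nCode:\n{code}")
    parts.append(f"Requirement: {new_requirement}\nAnalysis:")
    return "\n\n".join(parts)

prompt = triple_prompt(
    [("Return the maximum of two ints.",
      "Test case: max2(3, 5) == 5",
      "def max2(a, b):\n    return a if a > b else b")],
    "Return the minimum of two ints.",
)
```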
improve output qualityIterative n-gram based example selection
A selection algorithm that iteratively chooses examples based on their novelty and relevance, measured by overlap of n-grams and decay of redundancy. It aims to select diverse and informative examples for prompting.
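A simplified reading of the idea: greedily pick examples by bigram overlap with the query, discounting n-grams already covered by earlier picks so later selections add novelty (the scoring details here are an assumption, not the paper's exact algorithm):

```python
def ngrams(text, n=2):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def select_examples(query, pool, k=2, n=2):
    """Greedy selection by novelty-weighted n-gram overlap with the query."""
    target, covered, picked = ngrams(query, n), set(), []
    for _ in range(k):
        # Score = query n-grams matched, minus those already covered.
        best = max(pool, key=lambda ex: len((ngrams(ex, n) & target) - covered))
        picked.append(best)
        covered |= ngrams(best, n) & target
        pool = [ex for ex in pool if ex != best]
    return picked

pool = [
    "sort a list of numbers ascending",
    "sort a list of strings by length",
    "reverse a linked list in place",
]
chosen = select_examples("sort a list of numbers", pool, k=2)
```

The decay term (`- covered`) is what pushes the selection toward diverse examples instead of k near-duplicates of the best match.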
reduce hallucinations, improve relevancePriming techniques
Methods that involve carefully designing the initial prompt to set the context or behavior of the LLM, including explicitly mentioning constraints or desired characteristics.
improve output quality, securityRefinement-based prompting
Iteratively refining the generated code by providing feedback or improvements after initial generation, aimed at enhancing security or correctness.
reduce hallucinations, improve securityReasoning-based prompting
Guiding the LLM with prompts that explicitly require reasoning, such as logical deductions or mathematical computations, to generate more accurate or secure code.
improve reasoning and securityPrompt Chain techniques
Connecting multiple prompts in sequence where the output of one prompt feeds into the next, to build complex tasks step-by-step.
complex task decompositionLeast-to-Most Reasoning
A multi-step prompting method where complex reasoning tasks are broken down into simpler sub-steps, enabling better handling of complex symbolic and mathematical reasoning.
improve reasoningPrompting Techniques - Single/Multi-step
Distinguishes between techniques that solicit a final response in one step versus multiple iterative prompts, affecting cost and effectiveness.
save tokensSelf-regularization Prompting
A technique where prompts are designed to encourage the model to adhere to certain behaviors or constraints, effectively regularizing its outputs for stability and controllability.
reduce hallucinations and improve reliability