📄️ Zero-Shot Prompting
Ask the model to perform a task without providing examples.
📄️ Few-Shot Prompting
Guide the model with a few examples to shape its output.
📄️ Chain-of-Thought Prompting
Guide language models to reason step by step for complex tasks.
📄️ Self-Consistency
Sample multiple reasoning paths and select the answer that appears most often.
📄️ Generated Knowledge
Use model-generated facts or context to improve responses.
📄️ Prompt Chaining
Link multiple prompts to solve multi-step problems.
📄️ Tree of Thoughts
Explore and evaluate multiple branching reasoning paths, backtracking when needed.
📄️ Retrieval-Augmented Generation (RAG)
Incorporate external knowledge into model responses.
📄️ ART (Automatic Reasoning and Tool-use)
Combine reasoning with tool use for complex tasks.
📄️ Automatic Prompt Engineer (APE)
Use models to generate and refine prompts automatically for optimal performance.
📄️ Active Prompt
Identify the questions the model is most uncertain about and annotate those to build stronger exemplars.
📄️ Directional Stimulus Prompting
Guide the model's output with explicit cues, hints, and directional instructions.
📄️ PAL (Program-Aided Language)
Offload computation by generating and executing code as intermediate reasoning steps.
📄️ ReAct (Reasoning and Acting)
Integrate reasoning and action in an iterative loop for complex problem-solving.
📄️ Multimodal Chain-of-Thought
Combine visual and textual reasoning for complex multimodal problems.
📄️ Graph Prompting
Structure information as graphs to enable relational reasoning and complex problem solving.
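The first two entries above can be sketched as simple prompt-construction helpers: a zero-shot prompt states only the task, while a few-shot prompt prepends worked input/output examples to shape the model's response. The sentiment task and examples below are hypothetical placeholders, not part of any specific API.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction.
# The task text and example pairs are hypothetical illustrations.

def zero_shot_prompt(task: str) -> str:
    """Ask for the task directly, with no worked examples."""
    return f"{task}\nAnswer:"

def few_shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Prepend input/output pairs so the model can infer the expected format."""
    shots = "\n".join(f"{q}\nAnswer: {a}" for q, a in examples)
    return f"{shots}\n{task}\nAnswer:"

examples = [
    ("Classify the sentiment: 'I loved this movie.'", "positive"),
    ("Classify the sentiment: 'The plot was a mess.'", "negative"),
]
print(few_shot_prompt("Classify the sentiment: 'It was fine, I guess.'", examples))
```

Either string would then be sent as the user message to the model of your choice; the few-shot variant trades a longer prompt for more predictable output formatting.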