Generated Knowledge
Generated Knowledge prompting is a two-step technique where the model is first asked to generate relevant facts, context, or background information, and then uses that information to answer the main question. This approach leverages the model’s ability to synthesize supporting knowledge, improving the quality and depth of its final response.
This method is especially useful for tasks that require background knowledge, multi-step reasoning, or context that may not be explicitly provided in the prompt.
Use When
- The task requires background knowledge or context.
- You want to improve answer quality by providing supporting information.
- The model benefits from explicit intermediate steps.
Pattern
- Ask the model to list relevant facts, context, or background information.
- Use those facts in a follow-up prompt to answer the main question.
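The pattern above can be sketched as a simple two-call prompt chain. The `complete` function below is a hypothetical stand-in for whatever model API you use, and the prompt templates are illustrative, not canonical:

```python
def generated_knowledge_answer(question, complete, n_facts=3):
    """Two-step Generated Knowledge chain: elicit facts, then answer with them.

    `complete` is any callable that maps a prompt string to a model completion.
    """
    # Step 1: ask the model for relevant background knowledge.
    knowledge = complete(
        f"List {n_facts} relevant facts or pieces of background "
        f"information for the question: {question}"
    )
    # Step 2: feed the generated knowledge back in alongside the question.
    answer = complete(
        f"Facts:\n{knowledge}\n\n"
        f"Using these facts, answer the question: {question}"
    )
    return answer
```

In practice `complete` would wrap an API client with its own temperature and system-prompt settings; the chain itself is model-agnostic.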
Examples
Example 1: Environmental Science
Prompt:
List three facts about the Amazon rainforest.
Now, using those facts, explain why the Amazon is important for the environment.
Model output:
Facts:
1. The Amazon rainforest produces a large share of the oxygen generated on land, although most of it is consumed by the forest itself.
2. It is home to a vast diversity of plant and animal species.
3. The Amazon acts as a major carbon sink, absorbing large amounts of CO2.
Explanation:
The Amazon is important for the environment because it helps regulate the Earth's oxygen and carbon cycles, supports biodiversity, and mitigates climate change by absorbing carbon dioxide.
Example 2: Business Analysis
Prompt:
Step 1: Generate three key facts about remote work productivity.
Step 2: Using these facts, explain why companies should consider implementing hybrid work policies.
Model output:
Facts:
1. Studies show remote workers are 13-50% more productive than office workers.
2. Remote work reduces employee turnover by 25-40%.
3. Companies save an average of $11,000 per year per remote employee on overhead costs.
Explanation:
Based on these productivity gains, retention benefits, and cost savings, companies should implement hybrid work policies to capture the benefits of remote work while maintaining team collaboration and company culture through periodic in-person interaction.
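Because the final answer is only as good as the generated facts, Liu et al. sample several knowledge statements and aggregate the resulting answers. A minimal sketch, reusing the same hypothetical `complete` callable and substituting simple majority voting for the paper's confidence-based answer selection:

```python
from collections import Counter

def generated_knowledge_vote(question, complete, n_samples=5):
    """Answer once per sampled knowledge statement, then majority-vote."""
    answers = []
    for _ in range(n_samples):
        # Each call is expected to sample a (possibly different) fact,
        # e.g. via a nonzero temperature in the underlying model.
        fact = complete(f"Generate one fact relevant to the question: {question}")
        answers.append(
            complete(f"Fact: {fact}\nUsing this fact, answer the question: {question}")
        )
    # Return the most common answer across the samples.
    return Counter(answers).most_common(1)[0][0]
```

Voting over several fact-conditioned answers dilutes the effect of any single hallucinated fact, at the cost of extra model calls.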
Benefits
- Depth: Produces richer, more informed answers.
- Transparency: Makes the reasoning process explicit and auditable.
- Flexibility: Can be adapted to a wide range of knowledge-intensive tasks.
Pitfalls
- May introduce errors if generated facts are incorrect or hallucinated; the final answer inherits any mistakes from the first step.
- Requires careful prompt design to keep the generated knowledge relevant and coherent.
- Adds length, latency, and cost, since it requires at least two model calls.
References
- Liu, J., et al. (2022). Generated Knowledge Prompting for Commonsense Reasoning. ACL 2022.
- Petroni, F., et al. (2019). Language Models as Knowledge Bases? EMNLP 2019.
- Roberts, A., et al. (2020). How Much Knowledge Can You Pack Into the Parameters of a Language Model? EMNLP 2020.