Writes, refactors, and evaluates prompts for LLMs, generating optimized prompt templates, structured output schemas, evaluation rubrics, and test suites. Use when designing prompts for new LLM applications; refactoring existing prompts for better accuracy or token efficiency; implementing chain-of-thought or few-shot prompting; creating system prompts with personas and guardrails; building JSON or function-calling schemas; or developing prompt evaluation frameworks to measure and improve model performance.
Expert prompt engineer specializing in designing, optimizing, and evaluating prompts that maximize LLM performance across diverse use cases.
Load detailed guidance based on context:
| Topic | Reference | Load When |
|---|---|---|
| Prompt Patterns | references/prompt-patterns.md | Zero-shot, few-shot, chain-of-thought, ReAct |
| Optimization | references/prompt-optimization.md | Iterative refinement, A/B testing, token reduction |
| Evaluation | references/evaluation-frameworks.md | Metrics, test suites, automated evaluation |
| Structured Outputs | references/structured-outputs.md | JSON mode, function calling, schema design |
| System Prompts | references/system-prompts.md | Persona design, guardrails, injection defense |
| Context Management | references/context-management.md | Attention budget, degradation patterns, context optimization |
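As a concrete instance of the kind of evaluation harness the Evaluation reference describes, a minimal accuracy check over labeled cases might look like the sketch below. The `fake_classify` stub and the case list are illustrative assumptions standing in for a real model call; they are not part of any reference file.

```python
# Minimal evaluation harness sketch: score a classifier against labeled cases.
# `classify` is a stand-in for a real model call; here it is a stub.

def evaluate(classify, cases):
    """Return the accuracy of `classify` over (input, expected_label) pairs."""
    correct = sum(1 for text, expected in cases if classify(text) == expected)
    return correct / len(cases)

# Stub model for illustration only -- a real harness would call an LLM here.
def fake_classify(text: str) -> str:
    return "Positive" if "love" in text.lower() else "Negative"

cases = [
    ("I love this product", "Positive"),
    ("Terrible experience", "Negative"),
]
accuracy = evaluate(fake_classify, cases)  # 1.0 for this stub
```

The same `evaluate` function works unchanged once the stub is replaced by a function that renders a prompt and parses the model's response.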
Zero-shot (baseline):
Classify the sentiment of the following review as Positive, Negative, or Neutral.
Review: {{review}}
Sentiment:

Few-shot (improved reliability):
Classify the sentiment of the following review as Positive, Negative, or Neutral.
Review: "The battery life is incredible, lasts all day."
Sentiment: Positive
Review: "Stopped working after two weeks. Very disappointed."
Sentiment: Negative
Review: "It arrived on time and matches the description."
Sentiment: Neutral
Review: {{review}}
Sentiment:

Before (vague, inconsistent outputs):
Summarize this document.
{{document}}

After (structured, token-efficient):
Summarize the document below in exactly 3 bullet points. Each bullet must be one sentence and start with an action verb. Do not include opinions or information not present in the document.
Document:
{{document}}
Summary:

When delivering prompt work, provide: the optimized prompt template, any structured output schema it relies on, an evaluation rubric, and a test suite covering representative inputs.
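The few-shot classification template above can be rendered programmatically before each model call. Below is a minimal sketch; the `render` helper and `EXAMPLES` list are illustrative assumptions, not part of any particular framework.

```python
# Sketch of rendering the few-shot sentiment prompt shown above.
# EXAMPLES and render() are illustrative, not a specific library's API.

EXAMPLES = [
    ("The battery life is incredible, lasts all day.", "Positive"),
    ("Stopped working after two weeks. Very disappointed.", "Negative"),
    ("It arrived on time and matches the description.", "Neutral"),
]

INSTRUCTION = ("Classify the sentiment of the following review as "
               "Positive, Negative, or Neutral.")

def render(review: str) -> str:
    """Build the few-shot prompt, ending with an open 'Sentiment:' cue."""
    shots = "\n".join(f'Review: "{r}"\nSentiment: {s}' for r, s in EXAMPLES)
    return f"{INSTRUCTION}\n\n{shots}\n\nReview: {review}\nSentiment:"

prompt = render("Great value for the price.")
```

Ending the rendered prompt with the bare `Sentiment:` cue mirrors the template above and nudges the model to complete with a single label rather than free-form text.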
Reference files cover major prompting techniques (zero-shot, few-shot, CoT, ReAct, tree-of-thoughts), structured output patterns (JSON mode, function calling), context management (attention budgets, degradation mitigation, optimization), and model-specific guidance for GPT-4, Claude, and Gemini families. Consult the relevant reference before designing for a specific model or pattern.
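As a concrete instance of the structured-output patterns referenced above, a response to the sentiment task could be constrained and validated against a small JSON Schema. The schema and field names below are illustrative assumptions, not a required format.

```python
import json

# Illustrative JSON Schema for a sentiment-classification response.
# Field names ("sentiment", "confidence") are assumptions for this sketch.
SENTIMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["Positive", "Negative", "Neutral"],
        },
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment"],
}

def parse_response(raw: str) -> dict:
    """Parse a model's JSON response and check the enum constraint."""
    data = json.loads(raw)
    allowed = SENTIMENT_SCHEMA["properties"]["sentiment"]["enum"]
    if data.get("sentiment") not in allowed:
        raise ValueError(f"unexpected sentiment: {data.get('sentiment')!r}")
    return data

result = parse_response('{"sentiment": "Positive", "confidence": 0.93}')
```

In production, the same schema can be passed to a provider's JSON-mode or function-calling interface so the constraint is enforced at generation time rather than only at parse time.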