
prompt-engineering-patterns

Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.

Quality: 54% (Does it follow best practices?)

Impact: 83%, 1.69x (average score across 3 eval scenarios)

Security by Snyk: Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/llm-application-dev/skills/prompt-engineering-patterns/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a clear structure with both 'what' and 'when' clauses, which is good for completeness. However, it leans toward buzzword-heavy language ('Master advanced', 'maximize', 'reliability and controllability') rather than listing concrete, specific actions. The trigger terms cover the basics but miss many natural variations users might employ when seeking prompt engineering help.

Suggestions

Replace vague phrases like 'maximize LLM performance, reliability, and controllability' with specific concrete actions such as 'design system prompts, structure few-shot examples, implement chain-of-thought reasoning, reduce hallucinations'.

Expand trigger terms in the 'Use when' clause to include natural user phrases like 'system prompt', 'few-shot examples', 'chain of thought', 'prompt template', 'model instructions', 'reduce hallucinations'.

Remove the imperative 'Master' framing and use third-person declarative voice, e.g., 'Teaches advanced prompt engineering techniques...' or 'Applies advanced prompt engineering techniques...'
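Applying all three suggestions together, a revised description might look like the following. This is a sketch only: it assumes the conventional SKILL.md YAML frontmatter layout, and the exact wording is illustrative rather than taken from the reviewed skill.

```yaml
---
name: prompt-engineering-patterns
description: >
  Applies advanced prompt engineering techniques: designing system prompts,
  structuring few-shot examples, implementing chain-of-thought reasoning,
  enforcing structured output, and reducing hallucinations. Use when writing
  or optimizing a system prompt, prompt template, few-shot examples, or
  model instructions.
---
```

Note how the rewrite replaces 'Master advanced' with third-person declarative voice and folds the suggested trigger terms directly into the 'Use when' clause.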

Dimension scores:

Specificity: 2 / 3. Names the domain ('prompt engineering') and mentions some actions ('optimizing prompts', 'improving LLM outputs', 'designing production prompt templates'), but these are fairly high-level and not concrete specific actions like 'chain-of-thought structuring, few-shot example selection, system prompt design'.

Completeness: 3 / 3. Clearly answers both 'what' (master advanced prompt engineering techniques for LLM performance, reliability, controllability) and 'when' (explicit 'Use when optimizing prompts, improving LLM outputs, or designing production prompt templates').

Trigger Term Quality: 2 / 3. Includes some relevant keywords like 'prompt engineering', 'LLM', 'prompts', 'production prompt templates', but misses many natural user terms like 'system prompt', 'few-shot', 'chain of thought', 'prompt template', 'AI instructions', 'model output quality', or 'prompt optimization'.

Distinctiveness / Conflict Risk: 2 / 3. The domain of 'prompt engineering' is reasonably specific, but phrases like 'improving LLM outputs' and 'maximize LLM performance' are broad enough to potentially overlap with skills related to general LLM usage, AI coding assistants, or model evaluation.

Total: 9 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides excellent, executable code examples across many prompt engineering patterns, but suffers severely from verbosity and poor organization. It explains many concepts Claude already knows (best practices, common pitfalls, success metrics), inflating token usage without adding value. The content would benefit enormously from being split into a concise overview with references to detailed pattern files.

Suggestions

Remove or drastically reduce the 'When to Use This Skill', 'Core Capabilities', 'Best Practices', 'Common Pitfalls', and 'Success Metrics' sections — these describe concepts Claude already knows and waste tokens.

Split the monolithic file into a brief SKILL.md overview with links to separate files per pattern (e.g., STRUCTURED_OUTPUT.md, CHAIN_OF_THOUGHT.md, FEW_SHOT.md).

Add a decision workflow: 'Start with simple prompt → if inconsistent, add constraints → if still failing, add CoT → if parsing needed, add structured output' with explicit validation criteria at each step.

Cut the Quick Start section to just one minimal example and move the remaining 6 patterns to referenced files.
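The suggested decision workflow could be sketched as a small helper that escalates a prompt one level at a time. This is a hypothetical illustration, not code from the reviewed skill; the constraint and format strings are placeholder assumptions.

```python
def build_prompt(task: str, *, inconsistent: bool = False,
                 still_failing: bool = False, needs_parsing: bool = False) -> str:
    """Escalate a prompt per the suggested workflow:
    simple prompt -> constraints -> chain of thought -> structured output."""
    parts = [task]
    if inconsistent:
        # Level 2: tighten the output space before adding reasoning.
        parts.append("Answer in one short sentence. Do not speculate.")
    if still_failing:
        # Level 3: add chain-of-thought reasoning.
        parts.append("Think through the problem step by step, then give a final answer.")
    if needs_parsing:
        # Level 4: require machine-parseable output.
        parts.append('Respond with JSON only, matching {"answer": "<string>"}.')
    return "\n\n".join(parts)
```

The explicit validation criteria the suggestion asks for would sit between levels: re-run the same eval set after each escalation and only move to the next level while outputs still fail.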

Dimension scores:

Conciseness: 1 / 3. This skill is extremely verbose at ~350+ lines, with extensive sections explaining concepts Claude already knows well (prompt engineering patterns, best practices, common pitfalls, success metrics). The 'When to Use This Skill' and 'Core Capabilities' sections are pure description that add no actionable value. Lists like 'Best Practices' and 'Common Pitfalls' are generic advice Claude inherently understands.

Actionability: 3 / 3. The skill provides numerous fully executable Python code examples with real libraries (langchain, anthropic, pydantic), concrete patterns with copy-paste ready implementations, and specific schemas. The code examples are complete and functional.

Workflow Clarity: 2 / 3. While individual patterns are clear, there's no overarching workflow for when to apply which pattern or how to iterate through prompt optimization. The 'Progressive Disclosure' pattern (Pattern 4) shows escalation levels but lacks explicit validation checkpoints or decision criteria for moving between levels. The iterative refinement process mentioned in Core Capabilities is never actually detailed.

Progressive Disclosure: 1 / 3. Everything is crammed into a single monolithic file with no references to external files. The content spans many distinct topics (few-shot, CoT, structured outputs, RAG, caching, system prompts) that should be split into separate reference files. The document is a wall of code blocks and lists with no clear navigation structure.

Total: 7 / 12 (Passed)
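The structured-output pattern the Actionability row credits (pydantic-style schema validation of model replies) can be sketched with only the standard library. The schema and field names below are illustrative assumptions, not taken from the reviewed skill.

```python
import json
from dataclasses import dataclass


@dataclass
class Sentiment:
    label: str
    confidence: float


def parse_sentiment(raw: str) -> Sentiment:
    """Validate an LLM's JSON reply against the expected schema."""
    data = json.loads(raw)
    label = data["label"]
    confidence = float(data["confidence"])
    if label not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected label: {label}")
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return Sentiment(label, confidence)
```

In a real pipeline a validation failure here would feed back into the escalation workflow (e.g. retry with a stricter format instruction) rather than crash.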

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: wshobson/agents (Reviewed)

