
prompt-engineering-patterns

Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.


Quality: 41% (does it follow best practices?)

Impact: 83%, 1.69x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/llm-application-dev/skills/prompt-engineering-patterns/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a solid structure with an explicit 'Use when' clause, which is good for completeness. However, it leans toward high-level buzzwords ('maximize LLM performance, reliability, and controllability') rather than listing concrete, specific techniques. The trigger terms cover the basics but miss many natural variations users might employ when seeking prompt engineering help.

Suggestions

Replace abstract claims like 'maximize LLM performance, reliability, and controllability' with specific techniques such as 'chain-of-thought prompting, few-shot examples, system prompt design, output formatting constraints'.

Expand trigger terms in the 'Use when' clause to include natural variations like 'system prompt', 'few-shot', 'chain of thought', 'prompt template', 'AI instructions', 'prompt debugging'.
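Applying both suggestions, the skill's frontmatter description could be rewritten along these lines (illustrative wording only, not the skill's actual text):

```markdown
---
name: prompt-engineering-patterns
description: Apply chain-of-thought prompting, few-shot example selection,
  system prompt design, and output formatting constraints to production LLM
  prompts. Use when writing or debugging a system prompt, prompt template,
  few-shot examples, chain-of-thought reasoning, or AI instructions.
---
```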

Dimension scores:

Specificity: 2 / 3
The description names the domain ('prompt engineering') and mentions some actions ('optimizing prompts', 'improving LLM outputs', 'designing production prompt templates'), but these are fairly high-level rather than concrete, specific actions like 'chain-of-thought structuring, few-shot example selection, system prompt design'.

Completeness: 3 / 3
The description clearly answers both 'what' (advanced prompt engineering techniques for LLM performance, reliability, controllability) and 'when' with an explicit 'Use when' clause covering optimizing prompts, improving LLM outputs, and designing production prompt templates.

Trigger Term Quality: 2 / 3
Includes some relevant keywords like 'prompt engineering', 'LLM', 'prompts', 'production prompt templates', but misses many natural variations users might say such as 'system prompt', 'few-shot', 'chain of thought', 'prompt template', 'AI instructions', 'Claude prompt', or 'prompt optimization'.

Distinctiveness / Conflict Risk: 2 / 3
While 'prompt engineering' is a recognizable niche, broad terms like 'improving LLM outputs' and 'maximize LLM performance' could overlap with skills related to LLM evaluation, fine-tuning, or general AI development. The description could be more distinctive by specifying the exact techniques or differentiating from adjacent skills.

Total: 9 / 12

Passed

Implementation

14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a verbose, encyclopedic reference on prompt engineering that explains many concepts Claude already knows. It lacks a clear workflow for iterative prompt development and dumps all content into a single monolithic file. While it contains some executable code examples, much of the content is descriptive bullet points and generic advice rather than novel, actionable guidance that would change Claude's behavior.

Suggestions

Cut the content by 60-70%: remove 'When to Use', 'Core Capabilities' descriptions, 'Best Practices', 'Common Pitfalls', and 'Success Metrics' sections entirely — Claude already knows these concepts.

Add a clear iterative workflow: define a step-by-step process for prompt development (start simple → evaluate → diagnose issues → apply specific pattern → validate improvement) with explicit checkpoints.

Split into multiple files: keep SKILL.md as a concise overview with references to separate pattern files (e.g., STRUCTURED_OUTPUT.md, FEW_SHOT.md, COT.md) for detailed code examples.

Focus on non-obvious, novel techniques: instead of explaining what chain-of-thought is, provide specific prompt snippets that solve tricky edge cases Claude wouldn't handle well by default.
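The last suggestion calls for a "template plus execution" shape rather than a bare prompt string. A minimal sketch of that shape, with the model call stubbed out (swap in a real client such as the Anthropic SDK in production; all names here are illustrative, not taken from the skill):

```python
# Chain-of-thought as template + execution + parsing, not just a template.
# The completion below is a stub standing in for a real API response.

COT_TEMPLATE = """Question: {question}

Think step by step, then give the final answer on its own line as:
FINAL ANSWER: <answer>"""


def build_cot_prompt(question: str) -> str:
    """Fill the chain-of-thought template with the user's question."""
    return COT_TEMPLATE.format(question=question)


def extract_final_answer(completion: str) -> str:
    """Parse the delimited answer out of the model's reasoning text."""
    for line in completion.splitlines():
        if line.startswith("FINAL ANSWER:"):
            return line.removeprefix("FINAL ANSWER:").strip()
    raise ValueError("model did not emit a FINAL ANSWER line")


# Stubbed model response in place of an API call:
fake_completion = "9 + 7 = 16, and 16 / 2 = 8.\nFINAL ANSWER: 8"
print(extract_final_answer(fake_completion))  # -> 8
```

The parsing step is what makes the pattern testable: a prompt template alone cannot fail, but a parser on the delimited answer line can, which gives the iteration loop something concrete to validate against.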

Dimension scores:

Conciseness: 1 / 3
Extremely verbose at ~400+ lines. Explains concepts Claude already knows well (what few-shot learning is, what chain-of-thought is, what system prompts are). Bullet-point lists like 'When to Use This Skill', 'Core Capabilities', 'Best Practices', 'Common Pitfalls', and 'Success Metrics' are largely things Claude already understands. The 'Core Capabilities' section is entirely descriptive with no actionable content, wasting tokens.

Actionability: 2 / 3
Contains executable Python code examples with real libraries (LangChain, Anthropic SDK, Pydantic), which is good. However, many patterns are somewhat generic/boilerplate rather than providing novel, non-obvious techniques. The code examples are functional but some are incomplete (e.g., the CoT pattern is just a prompt template with no execution). Much of the content describes rather than instructs.

Workflow Clarity: 1 / 3
There is no clear multi-step workflow for prompt engineering iteration. The skill lists patterns independently but doesn't sequence them into a coherent process (e.g., when to start simple, how to diagnose issues, when to escalate complexity). The 'Progressive Disclosure' pattern hints at a workflow but lacks validation checkpoints or decision criteria for moving between levels. No feedback loops for prompt refinement.

Progressive Disclosure: 1 / 3
Monolithic wall of text with no bundle files or external references. All content is inline in a single massive file. The 'Core Capabilities' section could be separate reference files, the code patterns could be individual files, and the best practices/pitfalls could be split out. No navigation structure or cross-references exist.

Total: 5 / 12

Passed
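The Actionability note above credits the skill's Pydantic-based structured-output examples. The same pattern can be sketched with only the standard library (a stdlib stand-in, not the skill's actual code; the JSON string below stands in for a model response produced under a "respond only with JSON" instruction):

```python
import json
from dataclasses import dataclass

# Structured-output pattern: constrain the model to emit JSON,
# then validate the reply against an expected schema on parse.


@dataclass
class Sentiment:
    label: str
    confidence: float

    def __post_init__(self):
        # Reject replies that drift from the schema instead of
        # silently passing bad values downstream.
        if self.label not in {"positive", "negative", "neutral"}:
            raise ValueError(f"unexpected label: {self.label}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence out of range: {self.confidence}")


def parse_sentiment(raw: str) -> Sentiment:
    """Validate a model's JSON reply against the expected schema."""
    return Sentiment(**json.loads(raw))


# Stub standing in for an LLM reply:
reply = '{"label": "positive", "confidence": 0.92}'
print(parse_sentiment(reply))  # Sentiment(label='positive', confidence=0.92)
```

In a real deployment, Pydantic's `BaseModel` adds type coercion and richer error messages on top of this shape, which is why the review counts those examples as a strength.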

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: wshobson/agents (reviewed)
