
senior-prompt-engineer

World-class prompt engineering skill for LLM optimization, prompt patterns, structured outputs, and AI product development. Expertise in Claude, GPT-4, prompt design patterns, few-shot learning, chain-of-thought, and AI evaluation. Includes RAG optimization, agent design, and LLM system architecture. Use when building AI products, optimizing LLM performance, designing agentic systems, or implementing advanced prompting techniques.


Quality: 44%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/senior-prompt-engineer/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger term coverage and explicit 'Use when' guidance, making it functional for skill selection. However, it reads more like a resume or expertise listing than a concrete capability description — it lists topics and buzzwords rather than specific actions the skill performs. The extremely broad scope covering nearly all LLM-related tasks creates potential overlap with more specialized skills.

Suggestions

Replace expertise-listing language ('World-class prompt engineering skill', 'Expertise in') with concrete action verbs describing what the skill does (e.g., 'Rewrites and optimizes prompts for LLM systems, designs system prompts, creates evaluation rubrics').

Narrow the scope or organize sub-capabilities more clearly to reduce conflict risk with potential specialized skills for RAG, agent design, or evaluation.
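Applied to the skill's frontmatter, the suggested rewrite might look like this (a hypothetical sketch of a SKILL.md description, not the skill's actual content):

```yaml
---
name: senior-prompt-engineer
description: >
  Rewrites and optimizes prompts for LLM systems: designs system prompts,
  builds few-shot and chain-of-thought templates, defines structured output
  schemas, and creates evaluation rubrics. Use when optimizing a prompt for
  accuracy, designing an agent's system prompt, or adding structured outputs
  to an LLM pipeline.
---
```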

Dimension / Reasoning / Score

Specificity

The description names the domain (prompt engineering, LLM optimization) and lists several topic areas (prompt patterns, structured outputs, RAG, agent design), but these read more like a list of buzzwords and expertise areas than concrete actions. It lacks specific verbs describing what the skill actually does (e.g., 'rewrites prompts', 'generates evaluation rubrics', 'designs system prompts').

2 / 3

Completeness

The description answers both 'what' (prompt engineering, LLM optimization, prompt patterns, structured outputs, RAG, agent design, etc.) and 'when' with an explicit 'Use when...' clause covering building AI products, optimizing LLM performance, designing agentic systems, or implementing advanced prompting techniques.

3 / 3

Trigger Term Quality

Good coverage of natural terms users would say: 'prompt engineering', 'LLM', 'Claude', 'GPT-4', 'few-shot learning', 'chain-of-thought', 'RAG', 'agentic systems', 'AI products', 'prompting techniques'. These are terms users would naturally use when seeking help in this domain.

3 / 3

Distinctiveness Conflict Risk

While the prompt engineering niche is somewhat specific, the description is extremely broad within AI/LLM space — covering everything from prompt design to RAG to agent architecture to AI product development. It could easily conflict with more specialized skills for RAG, agent design, or AI evaluation if those existed separately.

2 / 3

Total: 10 / 12 (Passed)

Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is fundamentally misaligned with its stated purpose. Despite claiming to be about prompt engineering, it contains zero actual prompt engineering content—no prompt patterns, no examples, no concrete techniques for few-shot learning, chain-of-thought, or structured outputs. Instead, it's filled with generic senior engineering platitudes, technology listings, and abstract bullet points that Claude already knows. The content reads like a job description rather than an actionable skill.

Suggestions

Replace the generic content with actual prompt engineering patterns: include concrete examples of few-shot prompts, chain-of-thought templates, structured output schemas, and before/after optimization examples.

Add executable, copy-paste-ready prompt templates with specific input/output examples (e.g., a chain-of-thought prompt for reasoning tasks with expected output format).

Define clear workflows for common tasks like 'optimizing a prompt for accuracy' or 'designing a RAG pipeline' with explicit validation steps (e.g., evaluate with metrics, compare A/B results, iterate).

Remove all generic content Claude already knows: leadership advice, basic DevOps practices, tech stack listings, and security bullet points. Focus exclusively on novel prompt engineering knowledge.
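The kind of concrete, copy-paste-ready artifact these suggestions call for might look like the following (a minimal sketch; the template wording, function names, and output format are illustrative assumptions, not taken from the skill):

```python
# A chain-of-thought prompt template paired with a structured output
# contract, so the model's final answer can be parsed deterministically.

COT_TEMPLATE = """You are a careful reasoner. Solve the problem step by step.

Problem: {problem}

Respond in exactly this format:
Reasoning: <numbered steps, one per line>
Answer: <final answer only>"""


def build_cot_prompt(problem: str) -> str:
    """Fill the template with a concrete problem statement."""
    return COT_TEMPLATE.format(problem=problem)


def parse_answer(completion: str) -> str:
    """Extract the final answer line from a model completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    raise ValueError("completion did not follow the output format")


# Example: build the prompt, then parse a well-formed completion.
prompt = build_cot_prompt("A train travels 120 km in 2 hours. Average speed?")
print(parse_answer("Reasoning: 1. 120 / 2 = 60\nAnswer: 60 km/h"))  # 60 km/h
```

Pairing each template with a parser like this also gives the skill an explicit validation step: if `parse_answer` raises, the prompt or model output needs iteration.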

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive content Claude already knows (what TDD is, how to mentor, basic DevOps concepts, generic best practices). The 'Senior-Level Responsibilities' section is entirely generic leadership advice. Tech stack listings, performance targets, and security bullet points add no actionable prompt engineering knowledge.

1 / 3

Actionability

Despite being titled 'Senior Prompt Engineer,' there is zero concrete prompt engineering guidance—no actual prompt patterns, no examples of few-shot learning, chain-of-thought, or structured outputs. The bash commands reference scripts that don't exist with no explanation. Everything is abstract bullet points rather than executable instructions.

1 / 3

Workflow Clarity

No multi-step workflows are defined for any prompt engineering task. The 'Production Patterns' are just bullet-point lists of concepts with no sequencing, validation steps, or feedback loops. There is no clear process for optimizing a prompt, evaluating LLM output, or building an agentic system.

1 / 3

Progressive Disclosure

The skill does reference external files (references/prompt_engineering_patterns.md, etc.) which is appropriate structure, and the references are one level deep. However, the main file itself is a wall of generic content that should either be cut or moved, and the referenced files' descriptions are vague bullet points that don't help navigation.

2 / 3

Total: 5 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: sc30gsw/claude-code-customes (Reviewed)

