
senior-prompt-engineer

This skill should be used when the user asks to "optimize prompts", "design prompt templates", "evaluate LLM outputs", "build agentic systems", "implement RAG", "create few-shot examples", "analyze token usage", or "design AI workflows". Use for prompt engineering patterns, LLM evaluation frameworks, agent architectures, and structured output design.

Install with Tessl CLI

npx tessl i github:alirezarezvani/claude-skills --skill senior-prompt-engineer



Discovery

72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at trigger term coverage, with many natural phrases users would say, and carves out a distinct niche in prompt engineering. However, it is structured backwards, leading with 'when to use' rather than 'what it does', and it lacks concrete specificity about what capabilities or outputs the skill provides beyond listing task categories.

Suggestions

Add a clear opening statement describing what the skill does concretely, e.g., 'Designs and refines prompts for LLMs, creates evaluation rubrics, architects multi-agent systems, and structures RAG pipelines.'

Restructure to lead with capabilities (what) before triggers (when) for better readability and to clearly answer 'what does this do' first.
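Applying both suggestions, a revised description might look like the following SKILL.md frontmatter sketch. This is a hedged illustration, not the maintainer's wording: the field names follow common SKILL.md conventions, and the capability sentence reuses the reviewer's own suggested phrasing.

```yaml
---
name: senior-prompt-engineer
description: >
  Designs and refines prompts for LLMs, creates evaluation rubrics,
  architects multi-agent systems, and structures RAG pipelines.
  Use when the user asks to "optimize prompts", "design prompt templates",
  "evaluate LLM outputs", "build agentic systems", "implement RAG",
  "create few-shot examples", "analyze token usage", or "design AI workflows".
---
```

Leading with capabilities answers 'what does this do' first, while the trailing 'Use when' clause preserves the strong trigger-term coverage the review praises.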

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (prompt engineering, LLM work) and lists several actions such as 'optimize prompts', 'design prompt templates', and 'evaluate LLM outputs', but these are task categories rather than concrete, specific actions. Missing details on what specific techniques or outputs are produced. | 2 / 3 |
| Completeness | Has a 'Use when' clause with trigger terms, but the 'what does this do' portion is weak: it only lists task categories without explaining what the skill actually produces or how it helps. The description focuses heavily on triggers but lacks a clear capability explanation. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'optimize prompts', 'design prompt templates', 'evaluate LLM outputs', 'build agentic systems', 'implement RAG', 'create few-shot examples', 'analyze token usage', 'design AI workflows'. These are realistic phrases users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche in prompt engineering and LLM development. Specific terms like 'RAG', 'agentic systems', 'few-shot examples', and 'token usage' create a distinct domain that wouldn't overlap with general coding or document skills. | 3 / 3 |

Total: 10 / 12 (Passed)

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong skill with excellent actionability and workflow clarity. The content provides concrete, executable commands with realistic example outputs, and well-structured multi-step workflows with validation checkpoints. The ASCII diagrams and some example outputs are mildly verbose and could be trimmed for better token efficiency.

Suggestions

Consider removing or significantly condensing the ASCII workflow diagram, as the textual description and Mermaid export option are sufficient.

Trim some of the verbose example outputs (e.g., the full RAG evaluation report) to show just the key structure while noting that additional metrics follow.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Reasonably efficient, but includes some unnecessary verbosity, such as the detailed ASCII workflow diagram and extensive example outputs that could be trimmed. The table of contents adds overhead for a skill that could be navigated without it. | 2 / 3 |
| Actionability | Provides fully executable bash commands with concrete example outputs showing exactly what to expect. The workflows include specific steps with actual commands, and the patterns table gives clear guidance on when to use each approach. | 3 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with numbered steps, validation checkpoints (Step 4: Validate, Step 5: Compare results, Step 6: Validate with test cases), and explicit feedback loops for iterative improvement. | 3 / 3 |
| Progressive Disclosure | Well organized, with a clear overview, a quick-start section, and explicit references to detailed documentation files (references/prompt_engineering_patterns.md, etc.), plus a helpful table explaining when to load each reference. | 3 / 3 |

Total: 11 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 13 / 16 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |

Total: 13 / 16 (Passed)
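All three validation warnings concern frontmatter fields and body structure, so they could plausibly be resolved together. A minimal sketch, assuming conventional SKILL.md frontmatter; the exact field names and placement the validator expects are an assumption here, and the ordered list at the end addresses the body_steps warning:

```markdown
---
name: senior-prompt-engineer
license: MIT           # addresses the license_field warning
metadata:              # addresses metadata_version: a dictionary, not a scalar
  version: "1.0.0"
---

## Workflow

1. Clarify the task, target model, and constraints.
2. Draft the prompt using an appropriate pattern.
3. Validate against test cases and iterate.
```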
