
senior-prompt-engineer

This skill should be used when the user asks to "optimize prompts", "design prompt templates", "evaluate LLM outputs", "build agentic systems", "implement RAG", "create few-shot examples", "analyze token usage", or "design AI workflows". Use for prompt engineering patterns, LLM evaluation frameworks, agent architectures, and structured output design.

76

Quality: 53% — Does it follow best practices?
Impact: 91% (1.18x) — Average score across 6 eval scenarios

Security (by Snyk): Advisory — suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./engineering-team/senior-prompt-engineer/SKILL.md

Evaluation results

Customer Support Prompt Cost Reduction — 95% (+6%)
Prompt optimization workflow

| Criteria | Without context | With context |
| --- | --- | --- |
| Baseline analysis run | 91% | 100% |
| Baseline saved to file | 100% | 100% |
| GPT-4 model specified | 100% | 100% |
| Optimized version produced | 100% | 100% |
| Optimizer script used for optimization | 25% | 37% |
| Comparison report generated | 100% | 100% |
| Compare flag used | 60% | 100% |
| Issue identification step | 100% | 100% |
| Optimization pattern applied | 100% | 100% |
| Workflow log documents steps | 100% | 100% |
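The workflow above analyzes a baseline prompt, produces an optimized version, and generates a comparison report. A minimal sketch of that before/after comparison — the prompts and the rough ~4-characters-per-token estimate are illustrative, not the skill's actual optimizer script:

```python
def compare_prompts(baseline, optimized):
    """Report the token-count change between a baseline and an optimized prompt,
    using a rough ~4-chars-per-token estimate (use a real tokenizer for exact counts)."""
    base_tokens = max(1, len(baseline) // 4)
    opt_tokens = max(1, len(optimized) // 4)
    return {
        "baseline_tokens": base_tokens,
        "optimized_tokens": opt_tokens,
        "reduction_pct": round(100 * (base_tokens - opt_tokens) / base_tokens, 1),
    }

baseline = ("You are a very helpful, friendly, and extremely polite customer "
            "support assistant who always tries your best to answer questions.")
optimized = "You are a concise customer support assistant."
print(compare_prompts(baseline, optimized))
```

Saving both the baseline analysis and this report to files, as the criteria require, makes the optimization auditable.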

Knowledge Base Q&A System Quality Audit — 100%
RAG system quality evaluation

| Criteria | Without context | With context |
| --- | --- | --- |
| rag_evaluator.py used | 100% | 100% |
| JSON output file produced | 100% | 100% |
| Verbose flag used | 100% | 100% |
| Output flag used | 100% | 100% |
| Retrieval metrics present | 100% | 100% |
| Generation metrics present | 100% | 100% |
| Recommendations documented | 100% | 100% |
| Evaluation commands logged | 100% | 100% |
| Issues section in report | 100% | 100% |
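The criteria above describe a RAG evaluation that reports separate retrieval and generation metrics as JSON, plus recommendations. A minimal sketch of that output shape — the metric choices and function names are illustrative, not the skill's actual `rag_evaluator.py`:

```python
import json

def retrieval_metrics(retrieved_ids, relevant_ids):
    """Precision/recall of the retriever against a labeled relevant set."""
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = len(retrieved & relevant)
    return {
        "precision": hits / len(retrieved) if retrieved else 0.0,
        "recall": hits / len(relevant) if relevant else 0.0,
    }

def generation_metrics(answer, source_text):
    """Crude faithfulness proxy: share of answer tokens found in the sources."""
    answer_tokens = answer.lower().split()
    source_tokens = set(source_text.lower().split())
    grounded = sum(1 for t in answer_tokens if t in source_tokens)
    return {"grounded_token_ratio": grounded / len(answer_tokens) if answer_tokens else 0.0}

report = {
    "retrieval": retrieval_metrics(["d1", "d3"], ["d1", "d2"]),
    "generation": generation_metrics("refunds take 5 days", "refunds take 5 business days"),
    "recommendations": ["Increase retriever top-k if recall stays below 0.8"],
}
print(json.dumps(report, indent=2))
```

Keeping retrieval and generation metrics in separate sections, as the criteria require, makes it clear whether quality issues come from the retriever or the generator.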

Research Agent Architecture Review — 100% (+10%)
Agent workflow design and documentation

| Criteria | Without context | With context |
| --- | --- | --- |
| --validate flag used | 100% | 100% |
| ASCII visualize flag used | 100% | 100% |
| Mermaid format flag used | 100% | 100% |
| --format mermaid flag used | 100% | 100% |
| --estimate-cost flag used | 100% | 100% |
| --runs 50 specified | 100% | 100% |
| All three analyses referenced in summary | 100% | 100% |
| Script output used (not reimplemented) | 100% | 100% |
| --output flag used | 0% | 100% |
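The flags above suggest a tool that validates an agent workflow graph and renders it as ASCII or Mermaid. A rough sketch of the validate-and-render half, using a hypothetical step structure (not the actual tool's input format):

```python
def to_mermaid(steps):
    """Render an agent workflow (step -> list of next steps) as a Mermaid flowchart."""
    lines = ["flowchart TD"]
    for step, nexts in steps.items():
        if not nexts:
            lines.append(f"    {step}")
        for nxt in nexts:
            lines.append(f"    {step} --> {nxt}")
    return "\n".join(lines)

def validate(steps):
    """Every referenced step must be defined; returns a list of problems."""
    return [f"undefined step: {nxt}"
            for nexts in steps.values() for nxt in nexts if nxt not in steps]

workflow = {"plan": ["search"], "search": ["summarize"], "summarize": []}
assert validate(workflow) == []
print(to_mermaid(workflow))
```

Writing the rendered diagram to a file (the `--output` criterion the without-context run missed) keeps the architecture review reproducible rather than buried in terminal scrollback.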

Shipping Complaint Extraction Prompt — 88% (+22%)
Few-shot example design workflow

| Criteria | Without context | With context |
| --- | --- | --- |
| Example count 3-5 | 100% | 100% |
| Simple/typical case | 100% | 100% |
| Diverse example types | 80% | 100% |
| Consistent Input/Output format | 66% | 100% |
| Task description first | 100% | 100% |
| Output format specified | 100% | 100% |
| --extract-examples used | 0% | 100% |
| --validate-examples attempted | 0% | 0% |
| examples.json produced | 100% | 100% |
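The few-shot criteria above (task description first, 3–5 diverse examples, consistent Input/Output format, explicit output format) can be sketched as a simple prompt builder. The example data is invented for illustration:

```python
def build_few_shot_prompt(task, output_format, examples, query):
    """Assemble a few-shot prompt: task description first, then 3-5
    consistently formatted examples, then the new input."""
    assert 3 <= len(examples) <= 5, "use 3-5 examples"
    parts = [task, f"Output format: {output_format}", ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

examples = [
    ("Package arrived crushed", '{"issue": "damaged", "severity": "high"}'),
    ("Two weeks late, no tracking", '{"issue": "delay", "severity": "medium"}'),
    ("Left at wrong address", '{"issue": "misdelivery", "severity": "medium"}'),
]
prompt = build_few_shot_prompt(
    "Extract the shipping complaint as JSON.",
    "JSON with keys issue and severity",
    examples,
    "Box was soaked through",
)
print(prompt)
```

Note the diversity criterion: the three examples cover distinct complaint types rather than three variations of the same case.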

Contract Metadata Extraction Prompt — 85% (+20%)
Structured output prompt design workflow

| Criteria | Without context | With context |
| --- | --- | --- |
| Schema with typed fields | 100% | 100% |
| Schema is valid JSON | 100% | 100% |
| Schema in prompt | 100% | 100% |
| ONLY JSON enforcement | 100% | 100% |
| Start/end delimiter constraint | 0% | 100% |
| No markdown/explanation | 100% | 100% |
| --validate-schema attempted | 0% | 0% |
| --analyze run on prompt | 0% | 100% |
| Required fields marked | 100% | 100% |
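The structured-output criteria (typed schema embedded in the prompt, required fields marked, ONLY-JSON enforcement, start/end delimiters) fit together like this sketch; the schema fields and `<json>` delimiter choice are illustrative assumptions:

```python
import json

schema = {
    "type": "object",
    "properties": {
        "party_a": {"type": "string"},
        "party_b": {"type": "string"},
        "effective_date": {"type": "string"},
    },
    "required": ["party_a", "party_b"],
}

prompt = (
    "Extract contract metadata from the text below.\n"
    f"Respond with ONLY JSON matching this schema:\n{json.dumps(schema, indent=2)}\n"
    "Do not include markdown or explanation.\n"
    "Begin your response with <json> and end it with </json>."
)

def parse_response(raw):
    """Strip the delimiters, parse the JSON, and check required fields."""
    payload = raw.split("<json>", 1)[1].split("</json>", 1)[0]
    data = json.loads(payload)
    missing = [f for f in schema["required"] if f not in data]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return data

print(parse_response('<json>{"party_a": "Acme", "party_b": "Globex"}</json>'))
```

The delimiter constraint (the criterion the without-context run missed entirely) makes the JSON trivially extractable even when the model adds stray text around it.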

Prompt Audit and Strategy Recommendation — 80% (+26%)
Token cost estimation and pattern selection

| Criteria | Without context | With context |
| --- | --- | --- |
| --tokens --model gpt-4 used | 0% | 0% |
| --json flag used | 0% | 100% |
| Token count stored in file | 70% | 100% |
| --analyze run on prompt | 0% | 100% |
| Two or more named patterns | 100% | 100% |
| Pattern recommendation given | 100% | 100% |
| Cost or token reasoning | 100% | 100% |
| Issues identified from analysis | 70% | 100% |
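Cost-or-token reasoning like the criteria above require starts from a token count. Without the model's real tokenizer, a rough rule of thumb (~4 characters per token for English with GPT-style tokenizers) gives a ballpark; this sketch takes pricing as a parameter because real rates vary by model and change over time:

```python
def estimate_tokens(text):
    """Rough GPT-style token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost(prompt, runs, usd_per_1k_input_tokens):
    """Ballpark input-side cost of running a prompt many times."""
    tokens = estimate_tokens(prompt)
    return {
        "tokens_per_run": tokens,
        "total_input_cost_usd": round(tokens * runs * usd_per_1k_input_tokens / 1000, 4),
    }

prompt = "You are a support assistant. " * 40
print(estimate_cost(prompt, runs=50, usd_per_1k_input_tokens=0.03))
```

For exact counts, use the model's actual tokenizer (e.g. tiktoken for OpenAI models); the heuristic is only for quick pattern-selection reasoning like comparing a few-shot prompt against a zero-shot one.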

Repository: alirezarezvani/claude-skills

Evaluated:
Agent: Claude Code
Model: Claude Sonnet 4.6

