
evaluation

This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.

Overall score: 62

Quality: 53%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/evaluation/SKILL.md

Quality

Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at trigger term coverage and distinctiveness, providing a clear niche around agent evaluation and testing frameworks. However, it is essentially all 'when to use' with no 'what it does' — it never describes the concrete actions or outputs the skill performs, which is a significant gap in completeness and specificity.

Suggestions

Add a leading sentence describing what the skill concretely does, e.g., 'Builds evaluation frameworks for AI agents, creates scoring rubrics, implements LLM-as-judge pipelines, and defines quality gates for agent outputs.'

Restructure to separate the 'what' (capabilities) from the 'when' (trigger guidance): start with specific actions and outputs, then follow with a 'Use when...' clause. An illustrative rewrite follows this list.
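Combining both suggestions, a revised description might read as follows (an illustrative rewrite, not the skill's current text):

"Builds evaluation frameworks for AI agents: creates scoring rubrics, implements LLM-as-judge pipelines, and defines quality gates for agent outputs. Use when the user asks to 'evaluate agent performance', 'build test framework', 'measure agent quality', or 'create evaluation rubrics', or mentions multi-dimensional evaluation, agent testing, or LLM-as-judge."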

Dimension scores

Specificity: 2 / 3

The description names the domain (agent evaluation/testing) and mentions some actions like 'build test framework', 'measure agent quality', and 'create evaluation rubrics', but these are mostly listed as trigger phrases rather than concrete capability descriptions. It lacks a clear 'what it does' statement listing the specific actions the skill performs.

Completeness: 2 / 3

The 'when' is explicitly and thoroughly covered with trigger phrases and use cases. However, the 'what does this do' is essentially missing: the description never states what the skill actually does or produces, only when it should be triggered. This inverts the typical problem but still leaves completeness lacking.

Trigger Term Quality: 3 / 3

Strong coverage of natural trigger terms: 'evaluate agent performance', 'build test framework', 'measure agent quality', 'create evaluation rubrics', 'LLM-as-judge', 'multi-dimensional evaluation', 'agent testing', 'quality gates for agent pipelines'. These are terms users would naturally use when seeking this kind of help.

Distinctiveness / Conflict Risk: 3 / 3

The description targets a very specific niche (agent evaluation, LLM-as-judge patterns, quality gates for agent pipelines) that is unlikely to conflict with other skills. The trigger terms are domain-specific and distinctive.

Total: 10 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely verbose, explaining many concepts Claude already understands (non-determinism, why multi-dimensional scoring matters, what edge cases are). The code examples are pseudocode with undefined functions rather than executable implementations. The content would benefit from a cut of 60% or more, with conceptual explanations replaced by concrete, executable examples and templates.

Suggestions

Cut explanatory prose by at least 60%: remove the 'because' clauses explaining why concepts matter (e.g., why non-determinism matters, why multi-dimensional scoring is better) and trust Claude to understand these fundamentals.

Replace pseudocode examples with fully executable code: provide a complete, runnable LLM-as-judge evaluation function with an actual prompt template, scoring logic, and structured output parsing (see the sketch after this list).

Move detailed sections (rubric design, test set design, performance drivers table) into separate reference files and link to them from a lean overview, reducing the main skill to under 100 lines.

Add explicit validation checkpoints to the framework-building workflow, such as 'Run the rubric on 5 known-quality examples and verify the scores match expectations before scaling to the full test set.' The sketch below includes such a checkpoint.
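To ground suggestions two and four, here is a minimal sketch of what an executable LLM-as-judge function with a calibration checkpoint could look like. It assumes the OpenAI Python SDK (any chat-completion client would do); the model name, rubric dimensions, and weights are illustrative placeholders, not taken from the skill under review.

```python
import json
from statistics import mean

from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rubric: dimension -> (description, weight); weights sum to 1.0.
RUBRIC = {
    "correctness": ("Is the output factually and logically correct?", 0.5),
    "completeness": ("Does it address every part of the task?", 0.3),
    "clarity": ("Is it well organized and unambiguous?", 0.2),
}

JUDGE_PROMPT = """You are grading an AI agent's output.
Task: {task}
Agent output: {output}

Score each dimension from 1 (poor) to 5 (excellent):
{dimensions}

Respond with JSON only: {{"scores": {{"<dimension>": <int>, ...}}, "rationale": "<one sentence>"}}"""


def judge(task: str, output: str, model: str = "gpt-4o-mini") -> dict:
    """Run one judge call; return per-dimension scores plus a weighted total."""
    dims = "\n".join(f"- {name}: {desc}" for name, (desc, _) in RUBRIC.items())
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            task=task, output=output, dimensions=dims)}],
        temperature=0,  # damp (not eliminate) judge non-determinism
        response_format={"type": "json_object"},  # force parseable output
    )
    result = json.loads(resp.choices[0].message.content)
    result["weighted_score"] = round(
        sum(result["scores"][name] * w for name, (_, w) in RUBRIC.items()), 2)
    return result


def calibrate(known_examples: list[dict], tolerance: float = 1.0) -> bool:
    """Checkpoint: judge a handful of examples with known quality and verify the
    weighted scores land within `tolerance` of expectations before scaling up."""
    errors = [abs(judge(ex["task"], ex["output"])["weighted_score"] - ex["expected"])
              for ex in known_examples]
    print(f"mean calibration error: {mean(errors):.2f}")
    return all(e <= tolerance for e in errors)
```

The calibration set can be as small as five hand-scored examples, stratified by task complexity; if calibrate() returns False, tighten the rubric wording before running the full test set.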

Dimension scores

Conciseness: 1 / 3

Extremely verbose at roughly 250+ lines. Extensively explains concepts Claude already knows (what non-determinism is, why agents differ from traditional software, what rubrics are, why multi-dimensional scoring matters). The BrowseComp table and many explanatory paragraphs add bulk without actionable value. Most sections describe 'why' at length rather than providing lean 'how' instructions.

Actionability: 2 / 3

Provides some concrete guidance (code examples for the evaluation function and test set structure), but the code examples use undefined functions (load_rubric, assess_dimension, weighted_average), making them pseudocode rather than executable. Most content is conceptual guidance ('build rubrics', 'stratify by complexity') rather than copy-paste-ready implementations.

Workflow Clarity: 2 / 3

The 'Building Evaluation Frameworks' section provides a clear 8-step sequence, which is good. However, it lacks explicit validation checkpoints or feedback loops (e.g., no 'verify your rubric produces consistent scores before proceeding' step). For a skill that builds evaluation pipelines gating deployments, the absence of verification steps between stages is a gap.

Progressive Disclosure: 2 / 3

References a metrics reference file and lists related skills, which is good. However, the skill itself is monolithic, with extensive inline content that could be split into separate reference files (rubric design details, test set design, LLM-as-judge prompt templates). The integration section lists connections but doesn't provide navigable links.

Total: 7 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: muratcankoylan/Agent-Skills-for-Context-Engineering (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.