
# agent-evaluation

tessl i github:sickn33/antigravity-awesome-skills --skill agent-evaluation

Testing and benchmarking LLM agents, including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent.

Overall: 60%


## Validation (69%)
| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_examples | No examples detected (no code fences and no 'Example' wording) | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |

Total: 11 / 16 (Passed)

## Implementation (22%)

This skill is essentially a skeleton or outline rather than actionable guidance. It identifies important concepts in agent evaluation (statistical testing, behavioral contracts, adversarial testing) but provides no concrete implementation details, code examples, or actual solutions. The sharp edges table is particularly problematic with placeholder comments instead of real content.

### Suggestions

- Add executable code examples for each pattern (e.g., a Python snippet showing how to run statistical test evaluation with multiple runs and confidence intervals; a minimal sketch of this appears after the list)
- Replace placeholder comments in the Sharp Edges table with actual solutions and code snippets
- Add a concrete workflow section showing the sequence of steps for evaluating an agent, including validation checkpoints
- Expand anti-patterns with specific examples of what bad code/approaches look like versus the correct approach
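To make the first suggestion concrete, a snippet of that kind could look like the following minimal sketch: it runs the agent several times per task (agent output is nondeterministic) and reports a pass rate with a Wilson confidence interval rather than a single-run score. The `agent` callable and the task/scorer shapes are hypothetical placeholders, not APIs from the skill under review.

```python
import math
from typing import Callable

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a pass rate; more stable than a
    normal approximation when n is small."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

def evaluate(agent: Callable[[str], str],
             tasks: list[tuple[str, Callable[[str], bool]]],
             runs_per_task: int = 10) -> tuple[float, float]:
    """Run every task multiple times and aggregate pass/fail results."""
    passes = total = 0
    for prompt, passed in tasks:
        for _ in range(runs_per_task):
            total += 1
            if passed(agent(prompt)):
                passes += 1
    low, high = wilson_interval(passes, total)
    print(f"pass rate {passes}/{total} = {passes / total:.2%} "
          f"(95% CI {low:.2%} to {high:.2%})")
    return low, high
```

Running each task multiple times keeps a single lucky completion from inflating the score, and the interval width makes small sample sizes visible at a glance.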

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is relatively brief but includes some unnecessary narrative framing ('You're a quality engineer who has seen agents...') that doesn't add actionable value. The capabilities/requirements lists are efficient but the sharp edges table has placeholder comments instead of actual solutions. | 2 / 3 |
| Actionability | The skill provides no concrete code, commands, or executable examples. Patterns like 'Statistical Test Evaluation' and 'Behavioral Contract Testing' are named but not demonstrated (a behavioral-contract sketch follows this table). The sharp edges table contains only placeholder comments ('// Bridge benchmark and production evaluation') instead of actual solutions. | 1 / 3 |
| Workflow Clarity | There is no clear workflow or sequence of steps for evaluating agents. The content lists concepts (patterns, anti-patterns) but provides no guidance on how to actually implement an evaluation process, what order to follow, or how to validate results. | 1 / 3 |
| Progressive Disclosure | The content has reasonable section structure (Capabilities, Requirements, Patterns, Anti-Patterns, Sharp Edges, Related Skills) but the sections are mostly empty shells. References to related skills exist but the core content that should be present in SKILL.md is missing. | 2 / 3 |

Total: 6 / 12 (Passed)
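For the 'Behavioral Contract Testing' pattern flagged under Actionability, a minimal sketch might look like the following. The contract predicates and the agent interface are illustrative assumptions, since the skill itself does not define them.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """An invariant every agent response must satisfy, independent of
    the exact wording the agent chooses."""
    name: str
    holds: Callable[[str, str], bool]  # (prompt, response) -> bool

# Hypothetical example contracts; real ones would encode the agent's spec.
CONTRACTS = [
    Contract("non_empty_response", lambda p, r: bool(r.strip())),
    Contract("no_refusal_on_in_scope_task",
             lambda p, r: "I cannot help" not in r),
]

def check_contracts(agent: Callable[[str], str], prompts: list[str]) -> list[str]:
    """Return a human-readable description of every contract violation."""
    violations = []
    for prompt in prompts:
        response = agent(prompt)
        for contract in CONTRACTS:
            if not contract.holds(prompt, response):
                violations.append(f"{contract.name} violated on {prompt!r}")
    return violations
```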

## Activation (100%)

This is a strong skill description that follows best practices. It clearly specifies what the skill does (testing and benchmarking LLM agents with four specific activities), includes an explicit 'Use when:' clause with natural trigger terms, and carves out a distinct niche that won't conflict with other skills. The description is concise yet comprehensive.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'behavioral testing, capability assessment, reliability metrics, and production monitoring'. Also includes a concrete detail about benchmark performance ('less than 50% on real-world benchmarks'). | 3 / 3 |
| Completeness | Clearly answers both what (testing and benchmarking LLM agents with specific activities listed) and when (explicit 'Use when:' clause with trigger terms). The structure follows the recommended pattern. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'agent testing', 'agent evaluation', 'benchmark agents', 'agent reliability', 'test agent'. These cover common variations of how users would phrase requests about testing LLM agents. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on LLM agent testing/benchmarking. The combination of 'agent' with testing/evaluation terms creates a distinct trigger profile unlikely to conflict with general testing or general LLM skills. | 3 / 3 |

Total: 12 / 12 (Passed)
