Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on real-world benchmarks
Quality: 30% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Passed: no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/antigravity-agent-evaluation/SKILL.md`

## Quality
### Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (LLM agent testing/benchmarking) and lists relevant subcategories, but it reads more like a topic summary than an actionable skill description. It lacks a 'Use when...' clause, concrete actions the skill performs, and the trailing statistical claim ('less than 50% on real-world benchmarks') adds no functional value for skill selection.
**Suggestions**

- Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to evaluate, benchmark, or test an LLM agent's performance, reliability, or capabilities.'
- Replace category labels with concrete actions, e.g., 'Generates behavioral test suites, runs capability benchmarks, calculates reliability metrics, and sets up production monitoring for LLM agents.'
- Remove the editorial claim about '50% on real-world benchmarks' and instead add common user trigger terms like 'eval', 'agent evaluation', 'test suite', 'accuracy', 'performance testing'.
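Taken together, a revised `description` frontmatter field might look like the following. The wording is an illustrative sketch assembled from the suggestions above, not the skill's actual metadata:

```yaml
# Hypothetical rewrite of the skill's frontmatter description
description: >-
  Generates behavioral test suites, runs capability benchmarks, calculates
  reliability metrics, and sets up production monitoring for LLM agents.
  Use when the user asks to evaluate, benchmark, or test an LLM agent's
  performance, reliability, or capabilities (e.g. "eval", "agent evaluation",
  "test suite", "accuracy", "performance testing").
```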
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (LLM agent testing/benchmarking) and lists some action areas (behavioral testing, capability assessment, reliability metrics, production monitoring), but these are more like categories than concrete actions. It doesn't specify what the skill actually does (e.g., 'generates test suites', 'runs benchmarks', 'produces reports'). | 2 / 3 |
| Completeness | The description addresses 'what' at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also somewhat vague, bringing this to 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'LLM agents', 'benchmarking', 'behavioral testing', 'reliability metrics', and 'production monitoring', which are reasonably natural. However, it misses common user variations like 'evaluate', 'eval', 'test suite', 'agent evaluation', 'accuracy', or 'performance testing'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on LLM agent testing and benchmarking is a fairly specific niche, but terms like 'testing', 'monitoring', and 'metrics' could overlap with general software testing or monitoring skills. The added detail about 'less than 50% on real-world benchmarks' is a factoid rather than a disambiguating trigger. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely verbose, containing ~700+ lines of illustrative but non-executable TypeScript pseudocode that explains testing concepts Claude already understands. While it covers important patterns (statistical evaluation, behavioral contracts, adversarial testing, regression testing, flaky test handling), the code is not actionable—relying on undefined types and unimplemented methods. The content desperately needs to be split across multiple files with a concise overview in SKILL.md.
**Suggestions**

- Reduce SKILL.md to a concise overview (~50-80 lines) with pattern summaries and key decisions, moving detailed code examples into separate referenced files like PATTERNS.md or individual pattern files.
- Replace illustrative pseudocode with either truly executable examples using real frameworks (e.g., actual Langsmith/PromptFoo integration code) or concise decision tables describing when to use each pattern.
- Remove explanations of concepts Claude already knows (confidence intervals, Jaccard similarity, chi-squared tests) and focus on agent-evaluation-specific decisions and gotchas.
- Add explicit validation checkpoints to workflows, e.g., 'Before deploying: verify no critical regressions (p < 0.05), confirm no data leakage, validate adversarial pass rate > 70%'.
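As a sketch of the last suggestion, the pre-deploy checkpoint could be expressed as a small gate in the skill's own TypeScript idiom. The `EvalSummary` shape, field names, and thresholds here are illustrative assumptions, not code from the skill under review:

```typescript
// Hypothetical deploy gate: EvalSummary and canDeploy are illustrative
// names, not part of the reviewed skill.
interface EvalSummary {
  regressionPValue: number;    // p-value from the regression significance test
  dataLeakageDetected: boolean;
  adversarialPassRate: number; // fraction of adversarial cases passed, in [0, 1]
}

function canDeploy(summary: EvalSummary): { ok: boolean; reasons: string[] } {
  const reasons: string[] = [];
  // p < 0.05 means the regression is statistically significant, so block.
  if (summary.regressionPValue < 0.05) {
    reasons.push(`critical regression detected (p = ${summary.regressionPValue})`);
  }
  if (summary.dataLeakageDetected) {
    reasons.push("data leakage between training and eval sets");
  }
  if (summary.adversarialPassRate <= 0.7) {
    reasons.push(`adversarial pass rate ${summary.adversarialPassRate} is below the 0.70 threshold`);
  }
  return { ok: reasons.length === 0, reasons };
}

// A run that clears all three gates:
const verdict = canDeploy({
  regressionPValue: 0.4,
  dataLeakageDetected: false,
  adversarialPassRate: 0.85,
});
console.log(verdict.ok); // true
```

Collecting every failed check rather than failing fast turns the gate's output into a reviewable checklist, which fits the review's call for explicit validation gates in workflows.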
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~700+ lines. Massive code blocks explain concepts Claude already knows (statistical testing, chi-squared tests, Jaccard similarity). The interfaces, classes, and helper methods are illustrative pseudocode that could be condensed to patterns and key decisions. Explanations of basic testing concepts waste tokens. | 1 / 3 |
| Actionability | The code examples are TypeScript-like but not truly executable—they reference undefined types (Agent, AgentOutput, AgentContext, TestCase), unimplemented helper methods (containsRudeLanguage, isRelevantToCustomerService, similarity), and abstract interfaces. They illustrate patterns but aren't copy-paste ready. No concrete tool commands or real framework integration examples are provided. | 2 / 3 |
| Workflow Clarity | The Collaboration section has brief numbered workflows (design → create suite → implement → evaluate → iterate), but these are high-level and lack validation checkpoints. The patterns themselves show logical sequences within code but don't provide explicit step-by-step operational workflows with validation gates. The 'Sharp Edges' section identifies failure modes well, but the fixes are code patterns rather than actionable workflows. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files despite being extremely long. All patterns, sharp edges, and collaboration details are inlined. There are no bundle files, yet the content is far too long for a single SKILL.md—it should split detailed patterns into separate reference files. The structure exists (headers) but content is not appropriately distributed. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
## Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

**Validation for skill structure**
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1136 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |