Testing and benchmarking LLM agents, including behavioral testing, capability assessment, reliability metrics, and production monitoring, where even top agents achieve less than 50% on re...
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill agent-evaluation
Quality: 27% (Does it follow best practices?)
Impact: 99% (0.99x average score across 3 eval scenarios)
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/agent-evaluation/SKILL.md

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software: the same input can produce different outputs, and "correct" often has no single answer.
You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate; it's knowing where, how often, and why the agent fails.
- Run tests multiple times and analyze result distributions (see the sketch after this list)
- Define and test agent behavioral invariants
- Actively try to break agent behavior
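The first two items can share one harness. Here is a minimal sketch in TypeScript, assuming a hypothetical `runAgent` wrapper around the agent under test and an illustrative task-level `check`; none of these names come from the Tessl CLI.

```typescript
// A minimal sketch of distribution-based testing with behavioral invariants.
// `runAgent` is a hypothetical wrapper you provide around the agent under test.
type RunResult = { output: string; passed: boolean };

// Invariants the agent must hold on every run, regardless of sampled output.
// These two are illustrative assumptions, not a fixed list.
const invariants: Array<(output: string) => boolean> = [
  (o) => !o.includes("API_KEY"), // never leak secrets
  (o) => o.length > 0,           // never return an empty response
];

async function evaluateWithDistribution(
  runAgent: (input: string) => Promise<string>,
  input: string,
  check: (output: string) => boolean,
  trials = 10,
): Promise<{ passRate: number; invariantViolations: number }> {
  const results: RunResult[] = [];
  let invariantViolations = 0;

  for (let i = 0; i < trials; i++) {
    const output = await runAgent(input);
    // Invariant failures are counted separately: a single violation is a
    // hard failure even if the task-level check usually passes.
    if (!invariants.every((inv) => inv(output))) invariantViolations++;
    results.push({ output, passed: check(output) });
  }

  const passRate = results.filter((r) => r.passed).length / trials;
  return { passRate, invariantViolations };
}

// Usage: gate on a pass-rate threshold rather than a single green run.
// const { passRate, invariantViolations } = await evaluateWithDistribution(
//   runAgent, "Summarize this ticket", (o) => o.includes("summary"), 20);
// if (invariantViolations > 0 || passRate < 0.9) throw new Error("Behavioral regression");
```

The design choice is to report a pass rate over many trials instead of a boolean: a test that passes 60% of the time is information, not noise.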
| Issue | Severity | Solution |
|---|---|---|
| Agent scores well on benchmarks but fails in production | High | Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | High | Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | Medium | Multi-dimensional evaluation to prevent gaming (sketched below) |
| Test data accidentally used in training or prompts | Critical | Prevent data leakage in agent evaluation (sketched below) |
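For the metric-gaming row, a minimal sketch of per-axis gating: score several dimensions and require a floor on each before aggregating. The axes and floor values here are illustrative assumptions, not a prescribed rubric.

```typescript
// Multi-dimensional scoring to discourage metric gaming. All axes are
// normalized to 0..1; the specific axes are assumptions for this sketch.
interface Scores {
  taskSuccess: number;  // did it accomplish the task?
  faithfulness: number; // is the answer grounded, with no fabrication?
  cost: number;         // normalized inverse cost (higher is cheaper)
}

// Gate on per-axis minimums first, then aggregate. An agent that maxes one
// axis (e.g. cheap but unfaithful) fails the gate instead of gaming the mean.
function aggregate(s: Scores, floors: Scores): number | null {
  const axes = Object.keys(s) as Array<keyof Scores>;
  if (axes.some((a) => s[a] < floors[a])) return null; // hard fail
  return axes.reduce((sum, a) => sum + s[a], 0) / axes.length;
}

// Usage:
// aggregate(
//   { taskSuccess: 0.95, faithfulness: 0.4, cost: 0.99 },
//   { taskSuccess: 0.7, faithfulness: 0.7, cost: 0.2 },
// ); // => null: high success and low cost cannot compensate for fabrication
```

For the data-leakage row, a minimal sketch that fingerprints held-out test inputs and scans everything that reaches the agent's prompts (system prompt, few-shot examples, retrieved documents) for exact matches. `buildTestSetIndex` and `findLeaks` are hypothetical helpers, and exact hashing misses paraphrases; n-gram overlap is a common next step.

```typescript
import { createHash } from "node:crypto";

// Normalize lightly before hashing so trivial whitespace/case differences
// don't hide a leak.
const fingerprint = (text: string) =>
  createHash("sha256").update(text.trim().toLowerCase()).digest("hex");

// Fingerprint every held-out test input once, at dataset creation time.
function buildTestSetIndex(testInputs: string[]): Set<string> {
  return new Set(testInputs.map(fingerprint));
}

// Before each eval run, scan the prompt corpus for exact-match leakage.
function findLeaks(promptCorpus: string[], testIndex: Set<string>): string[] {
  return promptCorpus.filter((doc) => testIndex.has(fingerprint(doc)));
}

// Usage:
// const index = buildTestSetIndex(heldOutInputs);
// const leaks = findLeaks(allPromptDocuments, index);
// if (leaks.length > 0) throw new Error(`${leaks.length} test case(s) leaked into prompts`);
```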
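Flagging leaks as a hard error, rather than a warning, matches the table's "critical" severity: a leaked test case silently inflates every subsequent score.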
Works well with: multi-agent-orchestration, agent-communication, autonomous-agents
This skill applies when executing the workflow or actions described in the overview.