agent-evaluation

Testing and benchmarking LLM agents, including behavioral testing, capability assessment, reliability metrics, and production monitoring, where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent.

Install with Tessl CLI

npx tessl i github:Dokhacgiakhoa/antigravity-ide --skill agent-evaluation

Overall score: 62%


Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate; it's a statistically grounded picture of how reliably the agent behaves.
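Because the same input can produce different outputs, a single run tells you little. A minimal sketch of statistical test evaluation, assuming a hypothetical `agent` callable and a test case with a `check` predicate (both names are illustrative, not part of this skill's API):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a pass rate."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (center - margin, center + margin)

def evaluate_statistically(agent, case, n_runs=30, min_pass_rate=0.8):
    """Run the same case many times; pass only if the LOWER confidence
    bound on the pass rate clears the threshold, not a single run."""
    passes = sum(1 for _ in range(n_runs) if case.check(agent(case.prompt)))
    low, high = wilson_interval(passes, n_runs)
    return low >= min_pass_rate, (low, high)
```

Gating on the lower confidence bound rather than the raw pass rate is what makes the test resistant to lucky runs: a small sample with a perfect score can still fail if the interval is too wide.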

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing

Requirements

  • testing-fundamentals
  • llm-fundamentals

Patterns

🧠 Knowledge Modules (Fractal Skills)

1. Statistical Test Evaluation

2. Behavioral Contract Testing

3. Adversarial Testing

4. ❌ Single-Run Testing

5. ❌ Only Happy Path Tests

6. ❌ Output String Matching
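To make the contrast between Behavioral Contract Testing and the Output String Matching anti-pattern concrete, here is a minimal contract-check sketch. The JSON reply schema (`answer`, `confidence`) is invented for illustration; the point is asserting properties of the output rather than an exact string:

```python
import json

def contract_violations(reply: str) -> list:
    """Return a list of contract violations (empty list means pass).
    Checks structural properties instead of exact output text."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return ["reply is not valid JSON"]
    violations = []
    if "answer" not in data:
        violations.append("missing required 'answer' field")
    confidence = data.get("confidence")
    if confidence is not None and not (0 <= confidence <= 1):
        violations.append("'confidence' must be in [0, 1]")
    return violations
```

Two replies with completely different wording can both satisfy the contract, which is exactly what exact-match assertions cannot express.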

Repository
github.com/Dokhacgiakhoa/antigravity-ide
