anthropic-evaluations

This skill should be used when the user asks to "create evals", "evaluate an agent", "build evaluation suite", or mentions agent testing, graders, or benchmarks. Also suggest it when building coding agents, conversational agents, or research agents that need quality assurance.


Anthropic Evaluations

Build rigorous evaluations for AI agents using Anthropic's proven patterns.

Quick Reference

You MUST read the reference files for detailed guidance:

  • Grader Types - Code-based, model-based, human graders
  • Agent Type Patterns - Coding, conversational, research, computer use
  • Roadmap - Steps 0-8 for building evals from scratch
  • Frameworks - Harbor, Promptfoo, Braintrust, etc.

YAML Templates:

Annotated Examples:

Core Definitions

| Term | Definition |
| --- | --- |
| Task | Single test with defined inputs and success criteria |
| Trial | One attempt at a task (run multiple for consistency) |
| Grader | Logic that scores agent performance; tasks can have multiple |
| Transcript | Complete record of a trial (outputs, tool calls, reasoning) |
| Outcome | Final state in the environment (not just what the agent said) |
| Evaluation harness | Infrastructure that runs evals end-to-end |
| Agent harness | System enabling a model to act as an agent (scaffold) |
| Evaluation suite | Collection of tasks measuring specific capabilities |

Grader Types (Quick Reference)

| Type | Methods | Best For |
| --- | --- | --- |
| Code-based | String match, unit tests, static analysis, state checks | Fast, cheap, objective verification |
| Model-based | Rubric scoring, assertions, pairwise comparison | Nuanced, open-ended tasks |
| Human | SME review, A/B testing, spot-check sampling | Gold-standard calibration |

See Grader Types for detailed comparison.
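To make the code-based row concrete, here is a minimal sketch of two code-based graders. The function names and the convention of returning a 0.0–1.0 score are illustrative assumptions, not the API of any framework named in this skill:

```python
import re


def exact_match_grader(output: str, expected: str) -> float:
    """Code-based grader: binary score from an exact string match.

    Whitespace is stripped so trailing newlines don't fail the check.
    """
    return 1.0 if output.strip() == expected.strip() else 0.0


def regex_grader(output: str, pattern: str) -> float:
    """Code-based grader: pass if the output matches a regex pattern."""
    return 1.0 if re.search(pattern, output) else 0.0
```

Graders like these are fast and objective, which is why they suit regression suites; model-based graders cover the nuanced cases they cannot.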

Capability vs Regression Evals

| Type | Question | Target Pass Rate |
| --- | --- | --- |
| Capability | "What can this agent do well?" | Start low, hill-climb |
| Regression | "Does it still handle what it used to?" | Near 100% |

Capability evals with high pass rates "graduate" to regression suites.
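The graduation step can be sketched as a simple filter over per-task pass rates. The function name and 95% threshold are assumptions for illustration; pick a threshold that matches your regression target:

```python
def graduate_tasks(pass_rates: dict[str, float],
                   threshold: float = 0.95) -> list[str]:
    """Return task ids whose pass rate qualifies them to move
    from the capability suite into the regression suite."""
    return [task_id for task_id, rate in pass_rates.items()
            if rate >= threshold]
```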

Non-Determinism Metrics

| Metric | Measures | Use When |
| --- | --- | --- |
| pass@k | At least one success in k attempts | One success matters (coding) |
| pass^k | All k attempts succeed | Consistency is essential (customer-facing) |

Example: with a 75% per-trial success rate:

  • pass@3 ≈ 98% (at least one of three attempts succeeds: 1 − 0.25³)
  • pass^3 ≈ 42% (all three attempts succeed: 0.75³)
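Assuming independent trials, both metrics follow directly from the per-trial rate. A minimal sketch (function names are illustrative):

```python
def pass_at_k(p: float, k: int) -> float:
    """Probability of at least one success in k independent trials."""
    return 1 - (1 - p) ** k


def pass_hat_k(p: float, k: int) -> float:
    """Probability that all k independent trials succeed."""
    return p ** k


print(round(pass_at_k(0.75, 3), 3))   # 0.984
print(round(pass_hat_k(0.75, 3), 3))  # 0.422
```

The gap between the two numbers is itself a useful signal: a large spread means the agent can do the task but not reliably.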

Tracked Metrics

```yaml
tracked_metrics:
  - type: transcript
    metrics: [n_turns, n_toolcalls, n_total_tokens]
  - type: latency
    metrics: [time_to_first_token, output_tokens_per_sec, time_to_last_token]
```
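The latency metrics above can be derived from three raw timestamps plus the output token count. This sketch assumes timestamps in seconds measured from a shared clock; the function name is illustrative, not part of any harness API:

```python
def latency_metrics(request_at: float, first_token_at: float,
                    last_token_at: float, n_output_tokens: int) -> dict:
    """Derive the tracked latency metrics from raw trial timestamps."""
    generation_time = last_token_at - first_token_at
    return {
        "time_to_first_token": first_token_at - request_at,
        "output_tokens_per_sec": (n_output_tokens / generation_time
                                  if generation_time > 0 else 0.0),
        "time_to_last_token": last_token_at - request_at,
    }
```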

Attribution

Based on Demystifying evals for AI agents by Anthropic (January 2026).

Repository: dwmkerr/claude-toolkit
