
tessl-labs/review-model-performance

Run task evals across multiple Claude models, compare results side-by-side, and identify which skill gaps are model-specific versus universal

Overall score: 96 (1.65x) — average score across 3 eval scenarios

Quality: 97% — Does it follow best practices?

Impact: 96% (1.65x)

Security (by Snyk): Passed — no known issues


evals/scenario-3/rubric.json

{
  "context": "Tests whether the agent correctly analyzes multi-model eval results to produce a structured comparison report, including all required tables with correct formatting, symbol thresholds, pattern classification (A/B/C/D), baseline interpretation, and publishing recommendations.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "Overall summary table",
      "description": "Report contains an overall summary table with columns for Model, Without Skill score, With Skill score, and Delta for each of the three models",
      "max_score": 10
    },
    {
      "name": "Per-scenario breakdown",
      "description": "Report contains a per-scenario breakdown showing scores for each model (haiku, sonnet, opus) for each scenario individually",
      "max_score": 8
    },
    {
      "name": "Per-criterion table",
      "description": "Report contains a per-criterion breakdown table showing scores per model for each individual criterion",
      "max_score": 8
    },
    {
      "name": "Correct symbol thresholds",
      "description": "Symbol legend or usage applies: ✅ for >= 80%, 🟡 for >= 50% and < 80%, 🔴 for < 50% — and symbols are consistently applied to criterion scores in the per-criterion table",
      "max_score": 10
    },
    {
      "name": "Baseline interpretation",
      "description": "Report explicitly interprets what the baseline (without-skill) scores reveal — at minimum distinguishes between high-baseline and low-baseline scenarios, or notes variable baselines across models",
      "max_score": 8
    },
    {
      "name": "Pattern A identified",
      "description": "Report identifies at least one criterion where ALL models score poorly (< 50%) and labels it or describes it as a universal failure / tile gap (Pattern A)",
      "max_score": 10
    },
    {
      "name": "Pattern B identified",
      "description": "Report identifies at least one criterion where haiku scores poorly but sonnet and/or opus score well, and labels it or describes it as a capability gradient (Pattern B)",
      "max_score": 10
    },
    {
      "name": "Pattern D identified",
      "description": "Report identifies at least one scenario where the without-skill score exceeds the with-skill score for a model and labels it or describes it as a regression (Pattern D)",
      "max_score": 10
    },
    {
      "name": "Fix before publish recommendation",
      "description": "Report recommends fixing issues before publishing (does NOT recommend publishing) given the regressions present in the data",
      "max_score": 8
    },
    {
      "name": "eval-improve mentioned",
      "description": "Report mentions tessl-labs/eval-improve or the eval-improve tool as the suggested next step for addressing the regressions",
      "max_score": 8
    },
    {
      "name": "Re-run offer",
      "description": "Report offers or suggests re-running the comparison after fixes are applied to verify improvement",
      "max_score": 10
    }
  ]
}

evals/

tile.json