Optimize your skills and tiles: review SKILL.md quality, generate eval scenarios, run evals, compare across models, diagnose gaps, and re-run until scores improve.
[Dashboard summary: score 88 · best practices 94% · impact 88% · 1.07× average score across 24 eval scenarios · Passed · No known issues]
{
  "context": "Tests whether the agent correctly analyzes multi-model eval results to produce a structured comparison report, including all required tables with correct formatting, symbol thresholds, pattern classification (Universal Failure, Capability Gradient, Regression — only patterns present in the data are tested), baseline interpretation, and publishing recommendations.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "Overall summary table",
      "description": "Report contains an overall summary table with columns for Model, Without Skill score, With Skill score, and Delta for each of the three models",
      "max_score": 10
    },
    {
      "name": "Per-scenario breakdown",
      "description": "Report contains a per-scenario breakdown showing scores for each model (haiku, sonnet, opus) for each scenario individually",
      "max_score": 8
    },
    {
      "name": "Per-criterion table",
      "description": "Report contains a per-criterion breakdown table showing scores per model for each individual criterion",
      "max_score": 8
    },
    {
      "name": "Correct symbol thresholds",
      "description": "Symbol legend or usage applies: ✅ for >= 80%, 🟡 for >= 50% and < 80%, 🔴 for < 50% — and symbols are consistently applied to criterion scores in the per-criterion table",
      "max_score": 10
    },
    {
      "name": "Baseline interpretation",
      "description": "Report explicitly interprets what the baseline (without-skill) scores reveal — at minimum distinguishes between high-baseline and low-baseline scenarios, or notes variable baselines across models",
      "max_score": 8
    },
    {
      "name": "Universal Failure identified",
      "description": "Report identifies at least one criterion where ALL models score poorly (< 50%) and labels it or describes it as a Universal Failure (tile gap)",
      "max_score": 10
    },
    {
      "name": "Capability Gradient identified",
      "description": "Report identifies at least one criterion where haiku scores poorly but sonnet and/or opus score well, and labels it or describes it as a Capability Gradient",
      "max_score": 10
    },
    {
      "name": "Regression identified",
      "description": "Report identifies at least one scenario where the without-skill score exceeds the with-skill score for a model and labels it or describes it as a Regression",
      "max_score": 10
    },
    {
      "name": "Fix before publish recommendation",
      "description": "Report recommends fixing issues before publishing (does NOT recommend publishing) given the regressions present in the data",
      "max_score": 8
    },
    {
      "name": "eval-improve mentioned",
      "description": "Report mentions eval-improve (whether as 'eval-improve skill', 'eval-improve tool', or 'tessl-labs/eval-improve') as the suggested next step for addressing the regressions",
      "max_score": 8
    },
    {
      "name": "Re-run offer",
      "description": "Report offers or suggests re-running the comparison after fixes are applied to verify improvement",
      "max_score": 10
    }
  ]
}

Layout:

evals
  scenario-1
  scenario-2
  scenario-3
  scenario-4
  scenario-5
  scenario-6
  scenario-7
  scenario-8
  scenario-9
  scenario-10
  scenario-11
  scenario-12
  scenario-13
  scenario-14
  scenario-15
  scenario-16
  scenario-17
  scenario-18
  scenario-19
  scenario-20
  scenario-21
  scenario-22
  scenario-23
  scenario-24
skills
  compare-skill-model-performance
  optimize-skill-instructions
references
  optimize-skill-performance
  optimize-skill-performance-and-instructions
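The scoring rules the checklist describes can be sketched in code. This is a minimal Python sketch, not the tool's actual implementation: the function names (`symbol`, `weighted_total`, `classify`, `is_regression`) and the plain-dict data shapes are hypothetical, but the thresholds and pattern definitions follow the criteria above.

```python
def symbol(pct):
    """Map a criterion percentage to the report symbols:
    >= 80% -> ✅, >= 50% and < 80% -> 🟡, < 50% -> 🔴."""
    if pct >= 80:
        return "✅"
    if pct >= 50:
        return "🟡"
    return "🔴"


def weighted_total(awarded, checklist):
    """Sum points for a weighted checklist. `awarded` maps criterion
    name -> points earned; each criterion is capped at its max_score,
    and unnamed criteria score zero."""
    return sum(
        min(awarded.get(c["name"], 0), c["max_score"]) for c in checklist
    )


def classify(per_model):
    """Label a criterion from per-model percentages (hypothetical helper).
    Universal Failure: all models < 50%.
    Capability Gradient: haiku < 50% while sonnet and/or opus >= 80%."""
    if all(v < 50 for v in per_model.values()):
        return "Universal Failure"
    if per_model.get("haiku", 100) < 50 and any(
        per_model.get(m, 0) >= 80 for m in ("sonnet", "opus")
    ):
        return "Capability Gradient"
    return None


def is_regression(without_skill, with_skill):
    """A scenario regresses when the without-skill score exceeds
    the with-skill score for a model."""
    return without_skill > with_skill
```

For example, `classify({"haiku": 30, "sonnet": 85, "opus": 90})` returns `"Capability Gradient"`, and `is_regression(70, 60)` flags the scenario as a Regression, which per the checklist should trigger a fix-before-publish recommendation.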