Generate eval scenarios from repo commits, configure multi-agent runs, execute baseline and with-context evals, and compare results: the full setup pipeline before improvement begins.
Overall score: 90%
Does it follow best practices? (Validation for skill structure)
{
  "context": "Testing whether an agent correctly handles downloading scenarios when multiple commits were passed to tessl scenario generate — where --last only downloads the most recent generation and specific IDs are needed for each commit.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "does_not_use_last_only",
      "description": "The agent does NOT simply run `tessl scenario download --last`, which would only download scenarios from one of the two commits.",
      "max_score": 3
    },
    {
      "name": "finds_generation_ids",
      "description": "The agent either uses `tessl scenario list` to find the generation IDs, or instructs the user to use the Scenario IDs shown in the generate output.",
      "max_score": 2
    },
    {
      "name": "downloads_each_separately",
      "description": "The agent downloads scenarios for each commit separately using specific generation IDs (e.g., `tessl scenario download <id1>` then `tessl scenario download <id2>`).",
      "max_score": 3
    },
    {
      "name": "explains_why",
      "description": "The agent explains that each commit produced its own generation with its own ID, and `--last` only gets the most recent one.",
      "max_score": 2
    }
  ]
}
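The checklist above implies a concrete workflow. As a sketch, the correct multi-commit flow might look like the following; the commit hashes and generation-ID placeholders are hypothetical, and the exact output of `tessl scenario list` is an assumption:

```sh
# Generate scenarios for two commits; each commit produces its own
# generation with its own ID (hashes below are placeholders).
tessl scenario generate <commit-1> <commit-2>

# Find the generation ID for each commit, either from the generate
# output or by listing generations.
tessl scenario list

# Download each generation by its specific ID. Running
# `tessl scenario download --last` instead would fetch only the most
# recent generation, silently dropping the other commit's scenarios.
tessl scenario download <generation-id-1>
tessl scenario download <generation-id-2>
```

An agent following this flow would score on all four checklist items: it avoids `--last`, locates the generation IDs, downloads each generation separately, and can explain why `--last` alone is insufficient.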