Optimizes AI skills for activation, clarity, and cross-model reliability. Use when creating or editing skill packs, diagnosing weak skill uptake, reducing regressions, tuning instruction salience, improving examples, shrinking context cost, or setting benchmark and release gates for skills. Trigger terms: skill optimization, activation gap, benchmark skill, with/without skill delta, regression, context budget, prompt salience.
The infrastructure team at Stackwell recently shipped an update to their terraform-modules skill, which guides models in writing reusable Terraform modules. After the update, eval scores on two key scenarios dropped significantly compared to running without the skill. The team is baffled — they added more detail to the skill, but models are now performing worse on exactly the scenarios the new content was meant to improve.
You've been brought in to investigate and fix the regression. The raw before/after benchmark data and the current version of the skill are provided below. Your job is to figure out what in the skill is causing the regression, fix it, and document your analysis so the team understands what went wrong.
Produce:
- SKILL-fixed.md — the corrected version of the skill
- regression-report.md — your analysis of what went wrong in the skill update, including which scenarios got worse, what in the skill you believe caused it, and how you addressed it

The following files are provided as inputs. Extract them before beginning.
=============== FILE: inputs/benchmark-before.json ===============

```json
{
  "skill_version": "v1.2",
  "run_date": "2026-03-01",
  "models": ["ModelA", "ModelB"],
  "results": {
    "ModelA": {
      "module-outputs-scenario": { "without": 70, "with": 85 },
      "variable-validation-scenario": { "without": 60, "with": 75 },
      "provider-config-scenario": { "without": 80, "with": 88 }
    },
    "ModelB": {
      "module-outputs-scenario": { "without": 65, "with": 78 },
      "variable-validation-scenario": { "without": 58, "with": 70 },
      "provider-config-scenario": { "without": 75, "with": 83 }
    }
  }
}
```
=============== FILE: inputs/benchmark-after.json ===============

```json
{
  "skill_version": "v1.3",
  "run_date": "2026-04-01",
  "models": ["ModelA", "ModelB"],
  "results": {
    "ModelA": {
      "module-outputs-scenario": { "without": 70, "with": 62 },
      "variable-validation-scenario": { "without": 60, "with": 55 },
      "provider-config-scenario": { "without": 80, "with": 90 }
    },
    "ModelB": {
      "module-outputs-scenario": { "without": 65, "with": 59 },
      "variable-validation-scenario": { "without": 58, "with": 52 },
      "provider-config-scenario": { "without": 75, "with": 85 }
    }
  }
}
```
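To locate the regression, the with-minus-without delta for each model/scenario pair can be compared across the two runs. A minimal sketch (the helper names `skill_deltas` and `regressions` are illustrative, not part of the provided inputs; the dict shape matches the JSON files above):

```python
import json

def skill_deltas(results):
    """Map each model/scenario to its with-skill minus without-skill score delta."""
    return {
        model: {scen: r["with"] - r["without"] for scen, r in scenarios.items()}
        for model, scenarios in results.items()
    }

def regressions(before, after):
    """Scenarios whose with-skill delta flipped from non-negative to negative."""
    b = skill_deltas(before["results"])
    a = skill_deltas(after["results"])
    return sorted({
        scen
        for model in a
        for scen, delta in a[model].items()
        if delta < 0 <= b[model][scen]
    })
```

Run against the two benchmark files, e.g. `regressions(json.load(open("inputs/benchmark-before.json")), json.load(open("inputs/benchmark-after.json")))`; on the data above this flags module-outputs-scenario and variable-validation-scenario for both models, while provider-config-scenario improved.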
=============== FILE: inputs/SKILL.md ===============

Use when: Terraform module, reusable infrastructure, .tf files, outputs.tf, variables.tf, provider config.

A module is split across variables.tf, outputs.tf, and main.tf. Each variable should include a description field. You may want to add validation blocks to variables when you think it's relevant. Sometimes validation adds complexity that isn't worth it, especially for simple string variables. Consider adding validation if the variable has a restricted set of values.
```hcl
variable "environment" {
  type        = string
  description = "Deployment environment"
  # validation might be nice here
}
```

Outputs should generally follow this pattern, though feel free to adapt as needed:
```hcl
output "instance_id" {
  value       = aws_instance.main.id
  description = "optional description here"
}
```

Note: descriptions on outputs are nice to have but not strictly required. The description field can often be skipped if the output name is self-explanatory.
Always specify provider version constraints in versions.tf:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

Do not omit the source field. Do not omit the version constraint.