Audit existing skills with Tessl scoring, metadata and trigger-coverage checks, repo conventions, and skill-authoring best practices. Use when creating or revising a skill, triaging weak self-activation, or comparing a skill against source-repo guidance such as `AGENTS.md`, `CLAUDE.md`, or repo rules, plus external skill guidance. Do not use to verify general application code or to rewrite unrelated docs.
Tessl scorecard:

- Score: 100 (100%)
- Does it follow best practices? Passed
- Impact: Pending (no eval scenarios have been run)
- Known issues: none
Use this scorecard after running Tessl scoring so the audit stays evidence-based and repo-aware.
Treat these as must-fix before calling the skill ready:
- `name` or `description` fails discovery because it does not say what the skill does and when to use it

These usually lower trust or activation even if the skill technically works:
- `name` is vague, generic, or forgettable
- `SKILL.md` is bloated with detail that belongs in `references/`

These are worth tightening after the blockers and majors:
- `agents/openai.yaml` lags behind the skill's current wording in `SKILL.md`

Score each dimension qualitatively as strong, mixed, or weak:
- scope audited:
- Tessl:
- strengths:
- findings:
- recommended changes:
- rerun:
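The blocker and major checks on `name` and `description` above can be sketched as a small script. This is a minimal illustration, not Tessl's actual scoring logic: the vague-name word list and the "use when" heuristic are assumptions chosen for the example.

```python
def audit_frontmatter(name: str, description: str) -> list[str]:
    """Flag blocker- and major-level frontmatter issues.

    The specific heuristics here (the phrase check and the
    vague-name list) are illustrative assumptions, not the
    real audit rules.
    """
    findings = []

    # Blocker: description must say what the skill does AND when to use it.
    if not description.strip():
        findings.append("blocker: description is empty")
    elif "use when" not in description.lower():
        findings.append("blocker: description never says when to use the skill")

    # Major: vague, generic, or forgettable names hurt activation.
    if name.lower() in {"helper", "utils", "skill", "misc"}:
        findings.append("major: name is vague or generic")

    return findings

# Usage: a weak frontmatter pair surfaces both findings.
print(audit_frontmatter("utils", "Does stuff."))
```

A check like this only catches the mechanical cases; the qualitative strong/mixed/weak scoring above still needs a human or agent reading the skill against the repo's own guidance.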