Analyze a single GitHub issue for docker/docs — check whether the problem still exists, determine a verdict, and report findings. Use when asked to triage, assess, or review an issue, even if the user doesn't say "triage" explicitly: "triage issue 1234", "is issue 500 still valid", "should we close #200", "look at this issue", "what's going on with #200".
Overall: 96%

Impact: — (no eval scenarios have been run)
Advisory: suggest reviewing before use
Quality
Discovery
Score: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly communicates what the skill does (analyze/triage a single GitHub issue for docker/docs), when to use it (with a rich set of natural trigger examples), and is scoped narrowly enough to avoid conflicts. The inclusion of multiple example phrasings in the 'Use when' clause is particularly strong for matching diverse user requests.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: analyze a GitHub issue, check whether the problem still exists, determine a verdict, and report findings. These are specific, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (analyze a GitHub issue for docker/docs, check if the problem exists, determine a verdict, report findings) and 'when' (explicit 'Use when...' clause with multiple trigger examples covering various phrasings). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'triage', 'assess', 'review', 'is issue still valid', 'should we close', 'look at this issue', 'what's going on with'. Also includes explicit issue-number patterns like '#200' and 'issue 1234'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — scoped to a single GitHub issue in the docker/docs repository, with a specific workflow (triage/assess/verdict). The combination of repo specificity and task specificity makes conflicts with other skills very unlikely. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
Score: 92%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality skill that provides a clear, actionable triage workflow with executable commands at every step. The decision tree in step 6 is particularly well-structured with explicit verdicts and corresponding CLI actions. The only notable weakness is the reference to CLAUDE.md content that isn't provided in the bundle, though this is a minor issue since it's a single reference to project-level context.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient throughout. It assumes Claude knows how to use the gh CLI, read JSON, and navigate repos. No unnecessary explanations of concepts — every section delivers actionable instructions without padding. | 3 / 3 |
| Actionability | Every step includes exact, copy-paste-ready gh CLI commands with proper flags and jq filters. The verdict section provides specific commands for each outcome (close, label, escalate). The area labels are enumerated explicitly. This is fully executable guidance. | 3 / 3 |
| Workflow Clarity | The 7-step workflow is clearly sequenced with a logical progression: fetch → understand → verify URLs → check repo → check upstream → decide/act → report. Step 2 includes a validation checkpoint (check for linked PRs before acting), and step 6 provides explicit decision branches with corresponding actions. The feedback loop of checking timeline cross-references before deciding prevents duplicate work. | 3 / 3 |
| Progressive Disclosure | The content is well organized with clear numbered sections and is appropriately sized for a single SKILL.md. However, it references 'the vendored content table in CLAUDE.md' without providing that file in the bundle, and the inline list of area labels is long enough that it could be moved to a reference. For a skill of this complexity, the structure is reasonable, but the missing CLAUDE.md reference is a gap. | 2 / 3 |
| Total | | 11 / 12 (Passed) |
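The fetch → check-cross-references → decide/act flow praised above can be sketched as a shell function. The commands are illustrative, assume an authenticated `gh` CLI, and the `area/engine` label is a hypothetical example, not one confirmed by the skill bundle:

```shell
# Sketch of the triage workflow, under the assumptions stated above.
# Wrapped in a function so nothing runs until called on a real issue.
triage_issue() {
  repo="docker/docs"
  issue="$1"

  # Step 1: fetch the issue with the fields the triage needs.
  gh issue view "$issue" --repo "$repo" \
    --json title,body,state,labels,url

  # Step 2 checkpoint: look for linked PRs / cross-references
  # in the timeline before deciding, to avoid duplicate work.
  gh api "repos/$repo/issues/$issue/timeline" \
    --jq '.[] | select(.event == "cross-referenced") | .source.issue.number'

  # Step 6, branch A: close when the problem no longer exists.
  # gh issue close "$issue" --repo "$repo" --comment "No longer reproducible."

  # Step 6, branch B: or apply the appropriate area label.
  # gh issue edit "$issue" --repo "$repo" --add-label "area/engine"
}
```

Calling `triage_issue 1234` would print the issue JSON followed by any cross-referenced issue/PR numbers; the verdict commands stay commented out until a decision is made.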
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under metadata | Warning |
| Total | | 10 / 11 Passed |
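The flagged check can be reproduced locally. This is a minimal sketch that assumes the allowed frontmatter keys are `name`, `description`, and `metadata` (an assumption, not the actual spec), with `license` standing in as a hypothetical unknown key:

```shell
# Write a sample SKILL.md with one key outside the assumed allowed set.
cat > /tmp/SKILL.md <<'EOF'
---
name: triage-docs-issue
description: Analyze a single GitHub issue for docker/docs.
license: MIT
---
Body text.
EOF

# Extract frontmatter keys (lines between the --- fences) and flag
# any key not in the assumed allowed list.
allowed=" name description metadata "
sed -n '2,/^---$/p' /tmp/SKILL.md | grep -E '^[A-Za-z_-]+:' | cut -d: -f1 |
while read -r key; do
  case "$allowed" in
    *" $key "*) ;;                              # known key: fine
    *) echo "unknown frontmatter key: $key" ;;  # would trigger the warning
  esac
done
```

Running the sketch prints `unknown frontmatter key: license`, mirroring the validator's warning; moving the offending key under a `metadata:` block would clear it under these assumptions.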