Draft changelog PRs by collecting GitHub/Slack/Git changes, formatting with templates, running quality gates, and preparing a branch/PR. Use when generating weekly/monthly release notes or when the user asks to create a changelog from recent merges. Trigger with "changelog weekly", "generate release notes", "draft changelog", "create changelog PR".
Score: 76
Quality: 72% — Does it follow best practices?
Impact: Pending — no eval scenarios have been run
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/automation/mattyp-changelog/skills/changelog-orchestrator/SKILL.md`

Quality

Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities (collecting changes from multiple sources, formatting, quality gates, PR preparation), provides explicit 'Use when' guidance with temporal context (weekly/monthly), and includes concrete trigger phrases. It uses proper third-person voice throughout and is concise without being vague.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: collecting GitHub/Slack/Git changes, formatting with templates, running quality gates, and preparing a branch/PR. These are detailed, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (draft changelog PRs by collecting changes, formatting, running quality gates, preparing PR) and 'when' (generating weekly/monthly release notes, creating changelog from recent merges) with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger terms users would actually say: 'changelog weekly', 'generate release notes', 'draft changelog', 'create changelog PR', plus contextual terms like 'recent merges', 'weekly/monthly release notes'. Good coverage of variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche around changelog/release note generation with distinct triggers. Unlikely to conflict with general Git, PR review, or documentation skills due to the specific 'changelog' and 'release notes' focus. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 44%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill has a solid structural approach with good progressive disclosure to reference files, but suffers significantly from broken step numbering that makes the workflow confusing and ambiguous. Actionability is weakened by missing concrete command invocations and incomplete conditional logic (the quality gate failure path is truncated). The content would benefit from fixing the numbered list structure and adding explicit commands with arguments.
Suggestions
- Fix the broken numbered list nesting — use a single sequential numbering (1-9) or properly indent sub-steps so the workflow sequence is unambiguous.
- Complete the quality gate failure path: currently 'If score is below threshold:' has no sub-steps — add explicit fix/retry instructions.
- Add concrete command invocations with arguments for each script (e.g., `python ${CLAUDE_SKILL_DIR}/scripts/validate_config.py .changelog-config.json`) rather than just naming the scripts.
- Include at least one inline example showing a minimal config and expected changelog output, rather than deferring all examples to a reference file.
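To illustrate the last suggestion, a minimal config sketch might look like the following. Note that these field names are illustrative assumptions, not the skill's actual schema:

```json
{
  "repos": ["org/project"],
  "sources": ["github", "slack", "git"],
  "template": "weekly",
  "quality_threshold": 0.8,
  "output": "CHANGELOG.md"
}
```

The skill could then validate such a file up front (e.g., via the `validate_config.py` script mentioned above) and show the expected changelog output alongside it, so agents can check their work against a known-good example.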
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient but includes some unnecessary sections like the Prerequisites section explaining basic requirements Claude can infer. The Resources section at the end partially duplicates script references already mentioned in the workflow steps. | 2 / 3 |
| Actionability | References specific scripts and paths, which is good, but the workflow steps lack concrete executable commands (e.g., how to invoke the scripts with what arguments). Key details like how to 'decide date range' or what the quality threshold is are missing. No example invocations or expected outputs are shown inline. | 2 / 3 |
| Workflow Clarity | The numbered list has broken nesting — steps restart numbering multiple times (1,2,3 then 1,2,3 then 1,2 then 1,2,3), making the sequence confusing and ambiguous. Critical validation/feedback loops are incomplete: the quality score check says 'if score is below threshold' but never states what to do (the sub-steps are missing). There's no explicit retry/fix loop for failed quality gates. | 1 / 3 |
| Progressive Disclosure | Good structure with a concise overview pointing to one-level-deep references (implementation.md, errors.md, examples.md). Content is appropriately split between the main skill file and reference documents, with clear navigation signals. | 3 / 3 |
| Total | | 8 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
Version: 70e9fa4