
diff-analysis

Methodology for categorizing changes, assessing risks, and creating summaries from any changeset.

Triggers: diff analysis, changeset review, risk assessment, change categorization, semantic analysis, release preparation, change summary, git diff

Use when: analyzing specific changesets, assessing risk of changes, preparing release notes, categorizing changes by type and impact

Do not use when: doing a quick context catchup (use catchup instead) or a full PR review (use review-core with pensive skills).

Use this skill for systematic change analysis with risk scoring.
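The skill's core operations (categorize, score risk, summarize) can be sketched roughly as follows. This is an illustrative sketch only: the category keywords, risk weights, and the `Change` structure are assumptions for the example, not part of the skill itself.

```python
from dataclasses import dataclass

# Illustrative category keywords and risk weights -- assumptions, not the skill's own.
CATEGORY_KEYWORDS = {
    "bugfix": ("fix", "bug", "patch"),
    "feature": ("add", "feature", "implement"),
    "refactor": ("refactor", "rename", "cleanup"),
}
RISK_WEIGHTS = {"bugfix": 2, "feature": 3, "refactor": 1}

@dataclass
class Change:
    path: str
    message: str
    lines_touched: int

def categorize(change: Change) -> str:
    """Assign a semantic category based on commit-message keywords."""
    msg = change.message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in msg for k in keywords):
            return category
    return "other"

def risk_score(change: Change) -> int:
    """Score risk 1-5 from category weight plus a size penalty for large changes."""
    base = RISK_WEIGHTS.get(categorize(change), 2)
    size_penalty = 1 if change.lines_touched > 100 else 0
    return min(5, base + size_penalty)

change = Change("src/auth.py", "fix token refresh bug", 120)
print(categorize(change), risk_score(change))  # bugfix 3
```

A real implementation would look at file paths and hunk contents rather than just commit messages, but the shape — classify, then weight — is the same.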

Overall score: 85 (1.05x)

Quality: 66%. Does it follow best practices?

Impact: 98% (1.05x). Average score across 6 eval scenarios.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/analysis-methods/diff-analysis/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with strong completeness and distinctiveness. The explicit 'Use when' and 'DO NOT use when' clauses are particularly effective for disambiguation. The main weakness is that the core capabilities could be more concrete — the actions described (categorizing, assessing, summarizing) are somewhat generic verbs applied to the changeset domain.

Suggestions

Make capabilities more concrete by specifying outputs, e.g., 'Produces risk scores, categorizes changes by type (bugfix, feature, refactor), and generates structured summaries from diffs'.

Specificity (2 / 3): The description names the domain (changeset analysis) and several actions (categorizing changes, assessing risks, creating summaries), but the actions remain somewhat abstract rather than listing concrete, granular operations like 'scores risk on a 1-5 scale' or 'generates release note entries'.

Completeness (3 / 3): Clearly answers both 'what' (categorizing changes, assessing risks, creating summaries from changesets) and 'when' (explicit 'Use when' clause with triggers, plus helpful 'DO NOT use when' clauses that disambiguate from related skills like catchup and review-core).

Trigger Term Quality (3 / 3): Excellent coverage of natural trigger terms, including 'diff analysis', 'changeset review', 'risk assessment', 'change categorization', 'release preparation', 'change summary', and 'git diff'. These are terms users would naturally use when needing this skill.

Distinctiveness / Conflict Risk (3 / 3): The description explicitly differentiates itself from related skills (catchup for quick context, review-core for full PR review) with 'DO NOT use when' clauses, creating a clear niche for systematic change analysis with risk scoring.

Total: 11 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is well organized as a routing/orchestration document with good progressive disclosure, but it lacks actionability: every substantive piece of methodology is deferred to external modules, with no inline examples, concrete commands, or sample outputs. The 4-step workflow provides structure, but each step is too abstract to be independently useful without the referenced modules.

Suggestions

Add at least one concrete example of a categorized change output or risk assessment score inline, so the skill is useful even without loading all modules.

Include a sample summary format or template in Step 4 showing what the final deliverable should look like (e.g., a markdown snippet of a risk-assessed changelog).

Add validation criteria or decision points within steps—e.g., 'If more than 3 high-risk changes found, flag for senior review' or 'If categorization is ambiguous, default to the higher-impact category.'
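The suggestions above can be made concrete with a small sketch: a hypothetical renderer for a risk-assessed changelog entry that also includes one of the proposed decision points (flagging for senior review when more than three high-risk changes are found). The markdown format, field names, and thresholds are illustrative assumptions, not part of the skill.

```python
def render_summary(changes: list[dict]) -> str:
    """Render a risk-assessed changelog in markdown; flag for senior
    review when more than 3 high-risk (score >= 4) changes are present."""
    lines = ["## Change Summary", ""]
    for c in changes:
        lines.append(f"- **{c['category']}**: {c['title']} (risk {c['risk']}/5)")
    high_risk = [c for c in changes if c["risk"] >= 4]
    if len(high_risk) > 3:
        lines += ["", "> WARNING: more than 3 high-risk changes; flag for senior review."]
    return "\n".join(lines)

print(render_summary([
    {"category": "bugfix", "title": "fix token refresh", "risk": 2},
    {"category": "feature", "title": "add SSO support", "risk": 4},
]))
```

Embedding even one template like this inline would let the skill produce a usable deliverable without loading any external module.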

Conciseness (2 / 3): Mostly efficient, but includes some unnecessary sections like 'Activation Patterns' and 'When to Use' that largely duplicate the frontmatter description. The 'Progressive Loading' section with module references is useful, but the integration bullet points feel like padding without concrete details.

Actionability (1 / 3): The skill is almost entirely abstract direction with no concrete code, commands, or examples. Steps like 'Define comparison scope' and 'Group changes by semantic type' describe rather than instruct. All actual methodology is deferred to external modules without any inline executable guidance or example outputs.

Workflow Clarity (2 / 3): The 4-step methodology provides a clear sequence with named steps and TodoWrite checkpoints, which is good. However, each step lacks concrete validation criteria or feedback loops; there is no guidance on what to do if categorization is ambiguous or risk assessment reveals issues requiring re-evaluation.

Progressive Disclosure (3 / 3): Well structured, with a clear overview, conditional module loading, and one-level-deep references to specific modules. The 'Always Load' vs 'Conditional Loading' distinction is a good pattern, and external modules are clearly signaled with their purpose.

Total: 8 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure:

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
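The frontmatter_unknown_keys warning can typically be resolved exactly as the message suggests: move keys the spec does not recognize under a metadata block. The sketch below is hypothetical — the key name `maintainer` is invented for illustration, and only the "move to metadata" destination is taken from the warning itself.

```yaml
# Before: a hypothetical unrecognized key at the top level
name: diff-analysis
description: Methodology for categorizing changes ...
maintainer: example-team        # unknown key -> triggers the warning

# After: the unrecognized key moved under metadata
name: diff-analysis
description: Methodology for categorizing changes ...
metadata:
  maintainer: example-team
```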

Repository
majiayu000/claude-skill-registry
Reviewed
