tdg-personal/rules-distill

"Scan skills to extract cross-cutting principles and distill them into rules — append, revise, or create new rule files"

Quality: 56%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run.

Security (by Snyk): Passed

No known issues.


Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description conveys a meta-level task of extracting principles from skills into rule files, but relies heavily on internal jargon and lacks explicit trigger guidance. It would benefit significantly from a 'Use when...' clause and more natural language that matches how users would request this functionality.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to extract common patterns from skills, consolidate rules, or update rule files based on existing skill content.'
- Replace jargon like 'cross-cutting principles' with more natural terms users might say, such as 'common patterns', 'shared guidelines', or 'recurring conventions'.
- Specify what 'skills' and 'rule files' refer to (e.g., SKILL.md files, .rules files) to improve distinctiveness and reduce ambiguity.
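Taken together, the suggestions might yield frontmatter along these lines. This is a hypothetical sketch of a revised SKILL.md description, not the skill's actual frontmatter:

```yaml
# Illustrative only: a revised description with natural trigger terms
# and an explicit 'Use when...' clause
name: rules-distill
description: >
  Scan SKILL.md files to find common patterns and recurring conventions,
  then consolidate them into rule files (append, revise, or create new
  ones). Use when the user asks to extract common patterns from skills,
  consolidate rules, or update rule files based on existing skill content.
```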

Specificity: 2 / 3

Names the domain (skills and rules) and some actions (scan, extract, distill, append, revise, create), but the language is somewhat abstract — 'cross-cutting principles' and 'distill into rules' are not fully concrete without more context about what 'skills' and 'rules' mean in this system.

Completeness: 1 / 3

The description addresses 'what' (scan skills, extract principles, manage rule files) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' itself is also somewhat vague, warranting a 1.

Trigger Term Quality: 1 / 3

The description uses internal/technical jargon like 'cross-cutting principles', 'rule files', and 'distill' which are unlikely to match natural user language. Users would more likely say things like 'update rules', 'extract patterns', or 'consolidate guidelines'.

Distinctiveness / Conflict Risk: 2 / 3

The concept of scanning skills and creating rule files is somewhat niche, but 'rules' and 'skills' are broad terms that could overlap with other meta/configuration skills. The lack of explicit file types or specific trigger terms increases conflict risk.

Total: 6 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill with excellent actionability and workflow clarity — the three-phase process is clearly defined with concrete commands, detailed subagent prompts, and explicit user approval gates. The main weaknesses are moderate verbosity (some sections restate what's already evident from the workflow) and a monolithic structure that could benefit from splitting the subagent prompt and example into separate referenced files.

Suggestions

- Consider extracting the subagent prompt template and the end-to-end example into separate referenced files (e.g., SUBAGENT_PROMPT.md, EXAMPLE.md) to improve progressive disclosure and reduce the main file's length.
- Trim the 'Design Principles' section — most of these points are already demonstrated in the workflow itself and don't add new information for Claude.
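Concretely, the suggested split might look like the layout below. Only SUBAGENT_PROMPT.md and EXAMPLE.md come from the suggestion itself; the annotations are illustrative:

```
rules-distill/
├── SKILL.md             # core workflow: three phases, approval gates, verdict table
├── SUBAGENT_PROMPT.md   # subagent prompt template, referenced from SKILL.md
└── EXAMPLE.md           # end-to-end interaction example, referenced from SKILL.md
```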

Conciseness: 2 / 3

The skill is fairly detailed and well-structured, but includes some unnecessary explanation (e.g., the 'Design Principles' section restates things already clear from the workflow, and the 'When to Use' section is somewhat obvious). The verdict reference table and quality examples add value but the overall document could be tightened by ~20%.

Actionability: 3 / 3

Provides concrete bash commands for scanning, a complete subagent prompt template, specific JSON output schemas, exact verdict categories with definitions, and a full end-to-end example showing the complete interaction flow. The guidance is specific and executable.

Workflow Clarity: 3 / 3

The three-phase workflow (Inventory → Cross-read/Verdict → User Review) is clearly sequenced with explicit steps. The critical safety checkpoint — 'Never modify rules automatically. Always require user approval' — is prominently stated. The cross-batch merge step includes validation (re-check 2+ skills requirement). Error recovery is implicit but appropriate for this non-destructive workflow.

Progressive Disclosure: 2 / 3

The content is entirely self-contained in one file at 200+ lines. The subagent prompt, verdict reference table, JSON schemas, and full end-to-end example could be split into referenced files. However, the internal structure with clear headers and phases provides reasonable navigation within the single file.

Total: 10 / 12 (Passed)
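The two workflow properties praised above, the 2+ skills merge validation and the user-approval gate, can be sketched in a few lines. This is a hypothetical illustration of that gating logic, not the skill's actual code; `distill` and `approve` are invented names:

```python
def distill(skills: dict[str, list[str]], approve) -> list[str]:
    """Sketch of the cross-read / merge-validate / approve flow."""
    # Cross-read: record which skills each principle appears in
    seen_in: dict[str, set[str]] = {}
    for skill_name, principles in skills.items():
        for p in principles:
            seen_in.setdefault(p, set()).add(skill_name)
    # Merge validation: keep only principles found in 2+ skills
    candidates = [p for p, names in seen_in.items() if len(names) >= 2]
    # Safety checkpoint: never modify rules automatically;
    # each surviving principle still requires explicit user approval
    return [p for p in candidates if approve(p)]
```

With an `approve` callback that always rejects, nothing is returned, mirroring the skill's non-destructive default.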

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys: Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.
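The warning's own remedy ('removing or moving to metadata') might look as follows. This is a hedged sketch, since the exact set of recognized keys depends on the spec; `author` is an invented example key:

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
name: rules-distill
author: tdg-personal   # hypothetical unknown key
---
# After: the unrecognized key is nested under metadata instead
name: rules-distill
metadata:
  author: tdg-personal
```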

Total: 10 / 11 (Passed)

Reviewed